Nov 27 17:09:36 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 27 17:09:36 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:36 crc restorecon[4683]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:36 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 27 17:09:37 crc restorecon[4683]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 
17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 27 17:09:37 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 27 17:09:38 crc kubenswrapper[4792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 27 17:09:38 crc kubenswrapper[4792]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 27 17:09:38 crc kubenswrapper[4792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 27 17:09:38 crc kubenswrapper[4792]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
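
[The "Flag ... has been deprecated" records above are the kubelet objecting to command-line flags that upstream Kubernetes wants expressed in the KubeletConfiguration file named by --config, exactly as each message says. As a rough sketch only: the field names below are the upstream kubelet.config.k8s.io/v1beta1 ones, but the socket path, plugin directory, taint, and thresholds are assumed illustrative values, not values taken from this node, where the rendered kubelet configuration is managed for the host.

    # hypothetical kubelet config file passed via --config (illustrative values)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # replaces --container-runtime-endpoint (assumed CRI-O socket)
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir (assumed path)
    registerWithTaints:                                           # replaces --register-with-taints (illustrative taint)
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    evictionHard:                       # --minimum-container-ttl-duration is superseded by eviction settings
      memory.available: 100Mi           # illustrative threshold

Deprecated flags still take effect for this run; the records are warnings emitted before the server starts, not errors.]
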
Nov 27 17:09:38 crc kubenswrapper[4792]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 27 17:09:38 crc kubenswrapper[4792]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.317047 4792 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322403 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322432 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322443 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322451 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322460 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322472 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322483 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322492 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322500 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322508 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322516 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322524 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322532 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322540 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322548 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322555 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322563 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322571 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322578 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322586 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322593 4792 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322601 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322611 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322634 4792 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322667 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322677 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322688 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322697 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322706 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322714 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322722 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322729 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322737 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322745 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322753 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322763 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322770 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322797 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322805 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322815 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322823 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322833 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
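A note on the warning runs above: feature_gate.go:330 fires once per gate name that the kubelet's embedded Kubernetes feature-gate registry does not know, which is expected here because OpenShift passes its own gate names (GatewayAPI, NewOLM, and so on) through the kubelet's gate machinery, while feature_gate.go:351/353 fire for gates that are known but deprecated or already GA. A minimal, self-contained sketch of that dispatch logic, using a toy registry rather than the real component-base implementation:

    package main

    import "log"

    type gateState int

    const (
        alpha gateState = iota
        deprecated
        ga
    )

    // Toy registry; the names and states here are assumptions for the
    // sketch, not the kubelet's actual feature-gate table.
    var known = map[string]gateState{
        "KMSv1":                     deprecated,
        "CloudDualStackNodeIPs":     ga,
        "ValidatingAdmissionPolicy": ga,
        "NodeSwap":                  alpha,
    }

    // set mirrors the three message shapes seen in the log: unknown names
    // are skipped with a warning; deprecated and GA gates are applied but
    // flagged for future removal.
    func set(enabled map[string]bool, name string, value bool) {
        state, ok := known[name]
        if !ok {
            log.Printf("unrecognized feature gate: %s", name)
            return
        }
        switch state {
        case deprecated:
            log.Printf("Setting deprecated feature gate %s=%v. It will be removed in a future release.", name, value)
        case ga:
            log.Printf("Setting GA feature gate %s=%v. It will be removed in a future release.", name, value)
        }
        enabled[name] = value
    }

    func main() {
        enabled := map[string]bool{}
        for _, g := range []string{"GatewayAPI", "KMSv1", "CloudDualStackNodeIPs"} {
            set(enabled, g, true)
        }
        log.Printf("feature gates: %v", enabled)
    }
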
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322842 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322851 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322859 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322867 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322874 4792 feature_gate.go:330] unrecognized feature gate: Example Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322881 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322889 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322897 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322905 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322913 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322920 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322928 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322936 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322943 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322951 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322958 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322966 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322974 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322983 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322991 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.322999 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.323006 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.323013 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.323021 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.323029 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.323037 4792 feature_gate.go:330] 
unrecognized feature gate: PinnedImages Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.323044 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.323052 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.323060 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325446 4792 flags.go:64] FLAG: --address="0.0.0.0" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325469 4792 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325486 4792 flags.go:64] FLAG: --anonymous-auth="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325497 4792 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325509 4792 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325530 4792 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325542 4792 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325553 4792 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325562 4792 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325572 4792 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325581 4792 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325591 4792 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325600 4792 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325609 4792 flags.go:64] FLAG: --cgroup-root="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325617 4792 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325626 4792 flags.go:64] FLAG: --client-ca-file="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325635 4792 flags.go:64] FLAG: --cloud-config="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325670 4792 flags.go:64] FLAG: --cloud-provider="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325680 4792 flags.go:64] FLAG: --cluster-dns="[]" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325690 4792 flags.go:64] FLAG: --cluster-domain="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325699 4792 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325708 4792 flags.go:64] FLAG: --config-dir="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325717 4792 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325726 4792 flags.go:64] FLAG: --container-log-max-files="5" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325737 4792 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325746 4792 flags.go:64] FLAG: 
--container-runtime-endpoint="/var/run/crio/crio.sock" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325757 4792 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325767 4792 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325775 4792 flags.go:64] FLAG: --contention-profiling="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325784 4792 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325793 4792 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325803 4792 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325812 4792 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325823 4792 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325833 4792 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325842 4792 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325851 4792 flags.go:64] FLAG: --enable-load-reader="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325860 4792 flags.go:64] FLAG: --enable-server="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325869 4792 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325880 4792 flags.go:64] FLAG: --event-burst="100" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325890 4792 flags.go:64] FLAG: --event-qps="50" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325899 4792 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325908 4792 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325918 4792 flags.go:64] FLAG: --eviction-hard="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325938 4792 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325947 4792 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325956 4792 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325965 4792 flags.go:64] FLAG: --eviction-soft="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325975 4792 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325984 4792 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.325993 4792 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326003 4792 flags.go:64] FLAG: --experimental-mounter-path="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326012 4792 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326020 4792 flags.go:64] FLAG: --fail-swap-on="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326029 4792 flags.go:64] FLAG: --feature-gates="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326040 4792 
flags.go:64] FLAG: --file-check-frequency="20s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326049 4792 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326059 4792 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326068 4792 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326078 4792 flags.go:64] FLAG: --healthz-port="10248" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326087 4792 flags.go:64] FLAG: --help="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326095 4792 flags.go:64] FLAG: --hostname-override="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326104 4792 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326114 4792 flags.go:64] FLAG: --http-check-frequency="20s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326123 4792 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326132 4792 flags.go:64] FLAG: --image-credential-provider-config="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326141 4792 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326150 4792 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326159 4792 flags.go:64] FLAG: --image-service-endpoint="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326167 4792 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326176 4792 flags.go:64] FLAG: --kube-api-burst="100" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326185 4792 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326195 4792 flags.go:64] FLAG: --kube-api-qps="50" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326203 4792 flags.go:64] FLAG: --kube-reserved="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326212 4792 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326220 4792 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326229 4792 flags.go:64] FLAG: --kubelet-cgroups="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326238 4792 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326247 4792 flags.go:64] FLAG: --lock-file="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326256 4792 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326266 4792 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326275 4792 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326288 4792 flags.go:64] FLAG: --log-json-split-stream="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326297 4792 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326306 4792 flags.go:64] FLAG: --log-text-split-stream="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326314 4792 flags.go:64] FLAG: 
--logging-format="text" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326323 4792 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326333 4792 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326342 4792 flags.go:64] FLAG: --manifest-url="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326351 4792 flags.go:64] FLAG: --manifest-url-header="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326362 4792 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326371 4792 flags.go:64] FLAG: --max-open-files="1000000" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326382 4792 flags.go:64] FLAG: --max-pods="110" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326391 4792 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326400 4792 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326409 4792 flags.go:64] FLAG: --memory-manager-policy="None" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326425 4792 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326434 4792 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326443 4792 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326453 4792 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326472 4792 flags.go:64] FLAG: --node-status-max-images="50" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326481 4792 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326490 4792 flags.go:64] FLAG: --oom-score-adj="-999" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326500 4792 flags.go:64] FLAG: --pod-cidr="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326509 4792 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326522 4792 flags.go:64] FLAG: --pod-manifest-path="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326531 4792 flags.go:64] FLAG: --pod-max-pids="-1" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326540 4792 flags.go:64] FLAG: --pods-per-core="0" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326549 4792 flags.go:64] FLAG: --port="10250" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326558 4792 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326567 4792 flags.go:64] FLAG: --provider-id="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326576 4792 flags.go:64] FLAG: --qos-reserved="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326585 4792 flags.go:64] FLAG: --read-only-port="10255" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326594 4792 flags.go:64] FLAG: --register-node="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326603 4792 flags.go:64] FLAG: 
--register-schedulable="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326613 4792 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326627 4792 flags.go:64] FLAG: --registry-burst="10" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326636 4792 flags.go:64] FLAG: --registry-qps="5" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326669 4792 flags.go:64] FLAG: --reserved-cpus="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326679 4792 flags.go:64] FLAG: --reserved-memory="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326690 4792 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326699 4792 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326708 4792 flags.go:64] FLAG: --rotate-certificates="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326717 4792 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326726 4792 flags.go:64] FLAG: --runonce="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326735 4792 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326744 4792 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326786 4792 flags.go:64] FLAG: --seccomp-default="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326795 4792 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326808 4792 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326817 4792 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326826 4792 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326835 4792 flags.go:64] FLAG: --storage-driver-password="root" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326844 4792 flags.go:64] FLAG: --storage-driver-secure="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326853 4792 flags.go:64] FLAG: --storage-driver-table="stats" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326862 4792 flags.go:64] FLAG: --storage-driver-user="root" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326871 4792 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326880 4792 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326889 4792 flags.go:64] FLAG: --system-cgroups="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326898 4792 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326912 4792 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326921 4792 flags.go:64] FLAG: --tls-cert-file="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326930 4792 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326941 4792 flags.go:64] FLAG: --tls-min-version="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326951 4792 flags.go:64] 
FLAG: --tls-private-key-file="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326960 4792 flags.go:64] FLAG: --topology-manager-policy="none" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326969 4792 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326978 4792 flags.go:64] FLAG: --topology-manager-scope="container" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326987 4792 flags.go:64] FLAG: --v="2" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.326997 4792 flags.go:64] FLAG: --version="false" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.327009 4792 flags.go:64] FLAG: --vmodule="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.327020 4792 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.327030 4792 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327232 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327243 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327253 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327263 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327273 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327281 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327289 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327297 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327307 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327315 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327322 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327330 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327338 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327346 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327353 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327364 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
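The flags.go:64 block above is the kubelet dumping every command-line flag with its effective value at this verbosity (--v=2). The sorted FLAG: --name="value" shape falls out of a VisitAll-style traversal of the flag set; a rough sketch with the standard library's flag package (the kubelet itself uses spf13/pflag, but the traversal is the same idea, and the three flags below are stand-ins, not the real set):

    package main

    import (
        "flag"
        "log"
    )

    func main() {
        fs := flag.NewFlagSet("kubelet-sketch", flag.ExitOnError)
        fs.String("node-ip", "192.168.126.11", "node IP address")
        fs.Int("max-pods", 110, "maximum pods per node")
        fs.Bool("fail-swap-on", true, "fail if swap is enabled")
        _ = fs.Parse(nil)

        // VisitAll walks flags in lexicographical order, which is why the
        // FLAG: lines in the log appear alphabetically.
        fs.VisitAll(func(f *flag.Flag) {
            log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
        })
    }
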
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327374 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327383 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327391 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327400 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327409 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327419 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327427 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327435 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327442 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327450 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327457 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327465 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327473 4792 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327480 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327488 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327495 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327503 4792 feature_gate.go:330] unrecognized feature gate: Example Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327511 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327520 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327527 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327535 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327543 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327553 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327563 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327576 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327586 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters 
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327595 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327605 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327615 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327625 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327635 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327679 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327693 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327705 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327714 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327724 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327733 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327741 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327749 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327758 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327766 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327773 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327781 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327788 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327796 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327804 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327811 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327819 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327826 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327834 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327842 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 27 17:09:38 crc 
kubenswrapper[4792]: W1127 17:09:38.327852 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327862 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327870 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.327880 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.328684 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.344391 4792 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.344432 4792 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344549 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344561 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344571 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344581 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344591 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344598 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344606 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344614 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344622 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344630 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344676 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
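Between the gate dumps, server.go:491-493 record the effective build info: kubelet v1.31.5, with GOGC, GOMAXPROCS, and GOTRACEBACK all empty, meaning those environment variables are unset and the Go runtime defaults apply. A small illustrative sketch of reading the same knobs:

    package main

    import (
        "log"
        "os"
        "runtime"
    )

    func main() {
        // Empty strings here correspond to GOGC=""/GOMAXPROCS=""/GOTRACEBACK=""
        // in the "Golang settings" log line: unset, so defaults apply.
        log.Printf("Golang settings GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q",
            os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))

        // Effective parallelism regardless of the env var; it defaults to
        // NumCPU, which would be 12 per the cAdvisor inventory further down.
        log.Printf("effective GOMAXPROCS=%d NumCPU=%d",
            runtime.GOMAXPROCS(0), runtime.NumCPU())
    }
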
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344692 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344704 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344713 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344721 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344730 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344737 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344745 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344753 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344761 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344769 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344777 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344784 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344792 4792 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344800 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344809 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344817 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344827 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344837 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344846 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344854 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344862 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344870 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344877 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344885 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344893 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344900 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344908 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344918 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344928 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344936 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344944 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344951 4792 feature_gate.go:330] unrecognized feature gate: Example Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344960 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344971 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344980 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344990 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.344999 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345009 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345018 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345027 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345037 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345046 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345055 4792 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345065 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345074 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345083 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345095 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345107 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345118 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345127 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345136 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345144 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345151 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345159 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345166 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345175 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345182 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345190 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345197 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345204 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.345217 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345434 4792 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345445 4792 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345455 4792 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345463 4792 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 
27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345472 4792 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345481 4792 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345489 4792 feature_gate.go:330] unrecognized feature gate: Example Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345497 4792 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345504 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345512 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345521 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345528 4792 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345536 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345543 4792 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345551 4792 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345559 4792 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345566 4792 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345573 4792 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345581 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345589 4792 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345596 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345604 4792 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345614 4792 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345623 4792 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345632 4792 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345697 4792 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345708 4792 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345717 4792 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345727 4792 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345737 4792 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345745 4792 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345753 4792 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345763 4792 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345771 4792 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345779 4792 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345787 4792 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345796 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345805 4792 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345813 4792 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345821 4792 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345832 4792 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345841 4792 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
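The same unrecognized-gate enumeration appears several times in a row because the kubelet re-applies the gate list at several points during startup, so the repetition above is normal rather than a sign of distinct problems. When triaging a journal like this one, it can help to collapse the passes into one tally per gate name; a small filter along those lines (illustrative; it reads journal text on stdin):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "sort"
    )

    func main() {
        re := regexp.MustCompile(`unrecognized feature gate: (\S+)`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                counts[m[1]]++
            }
        }
        names := make([]string, 0, len(counts))
        for n := range counts {
            names = append(names, n)
        }
        sort.Strings(names)
        for _, n := range names {
            fmt.Printf("%4d %s\n", counts[n], n)
        }
    }

Usage, assuming the unit name on this host: journalctl -u kubelet --no-pager | go run tally.go
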
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345852 4792 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345860 4792 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345870 4792 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345879 4792 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345887 4792 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345895 4792 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345904 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345912 4792 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345920 4792 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345929 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345936 4792 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345944 4792 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345952 4792 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345960 4792 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345969 4792 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345977 4792 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345984 4792 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.345992 4792 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346000 4792 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346009 4792 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346017 4792 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346025 4792 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346032 4792 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346040 4792 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346048 4792 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 
17:09:38.346055 4792 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346064 4792 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346074 4792 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.346083 4792 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.346099 4792 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.348471 4792 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.359746 4792 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.359907 4792 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.363876 4792 server.go:997] "Starting client certificate rotation"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.363933 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.365067 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-06 09:37:53.574883345 +0000 UTC
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.365180 4792 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 208h28m15.209709516s for next certificate rotation
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.407474 4792 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.413283 4792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.447732 4792 log.go:25] "Validated CRI v1 runtime API"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.517236 4792 log.go:25] "Validated CRI v1 image API"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.519973 4792 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.532511 4792 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-27-17-04-59-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.532568 4792 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.559271 4792 manager.go:217] Machine: {Timestamp:2025-11-27 17:09:38.556878525 +0000 UTC m=+0.899704923 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:065c870f-af17-4bc7-a39f-66fec82f3422 BootID:0c8b1357-53ae-4c5b-a3c1-1529964132d7 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5e:f1:32 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5e:f1:32 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d0:8d:f6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:13:82:6d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d6:60:e1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f5:74:4d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ea:a9:4a:e3:f5:ba Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5a:00:94:b5:fb:7c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.559672 4792 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.559883 4792 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.560320 4792 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.560676 4792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.560733 4792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.562982 4792 topology_manager.go:138] "Creating topology manager with none policy"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.563023 4792 container_manager_linux.go:303] "Creating device plugin manager"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.568871 4792 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.568911 4792 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.569636 4792 state_mem.go:36] "Initialized new in-memory state store"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.569850 4792 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.574360 4792 kubelet.go:418] "Attempting to sync node with API server"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.574397 4792 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.574428 4792 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.574454 4792 kubelet.go:324] "Adding apiserver pod source"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.574474 4792 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.590621 4792 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.592499 4792 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.594576 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.594580 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.594718 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.594745 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.596405 4792 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598219 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598264 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598281 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598295 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598317 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598331 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598344 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598365 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598380 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598393 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598412 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.598425 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.599598 4792 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.600341 4792 server.go:1280] "Started kubelet"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.600611 4792 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.600813 4792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.602049 4792 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.602102 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 27 17:09:38 crc systemd[1]: Started Kubernetes Kubelet.
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.604840 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.604900 4792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.611733 4792 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.611806 4792 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.612194 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:54:28.329636432 +0000 UTC
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.612699 4792 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.612879 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.614577 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.614722 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.615330 4792 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.615364 4792 factory.go:55] Registering systemd factory
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.615379 4792 factory.go:221] Registration of the systemd container factory successfully
Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.613486 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="200ms"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.615858 4792 server.go:460] "Adding debug handlers to kubelet server"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.615887 4792 factory.go:153] Registering CRI-O factory
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.615923 4792 factory.go:221] Registration of the crio container factory successfully
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.615956 4792 factory.go:103] Registering Raw factory
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.615981 4792 manager.go:1196] Started watching for new ooms in manager
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.619967 4792 manager.go:319] Starting recovery of all containers
Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.629962 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187bec3203b38409 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-27 17:09:38.600297481 +0000 UTC m=+0.943123839,LastTimestamp:2025-11-27 17:09:38.600297481 +0000 UTC m=+0.943123839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638115 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638208 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638228 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638244 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638260 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638277 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638291 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638316 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638334 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638352 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638365 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638383 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638399 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638418 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638434 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638448 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638461 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638476 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638495 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638521 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638545 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638565 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638591 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638614 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638636 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638682 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638703 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638718 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638760 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638777 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638794 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638811 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638830 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638851 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638892 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638910 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638925 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638941 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638956 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638972 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.638987 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639002 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639019 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639033 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639049 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639063 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639081 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639096 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639112 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639131 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639149 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639164 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639184 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639202 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639280 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639306 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639332 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639354 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639377 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639397 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639417 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639437 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639456 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639475 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639489 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639506 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639520 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639534 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639550 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639564 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639578 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639591 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639604 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639619 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639633 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639676 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639696 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639717 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639741 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639762 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639780 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639802 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639821 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639840 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639858 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639877 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639894 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639911 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639933 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639951 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639967 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.639987 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640006 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640021 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640041 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640058 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640076 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640094 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640112 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640132 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640151 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640169 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640188 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640206 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640235 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640255 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640277 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640321 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640344 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640369 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.640394 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642415 4792 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642459 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642485 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642540 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642565 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642588 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642609 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642629 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642678 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642698 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642721 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642738 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642762 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642779 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642798 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642821 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642839 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642858 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642877 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642892 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642907 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642921 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642935 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642951 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642968 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.642985 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643007 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643047 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643066 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643085 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643104 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643123 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643143 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643161 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643182 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643203 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643224 4792
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643247 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643277 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643297 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643317 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643333 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643351 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643371 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643390 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643407 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643426 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643453 
4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643472 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643491 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643510 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643528 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643546 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643564 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643588 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643606 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643625 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643707 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643735 4792 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643755 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643775 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643797 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643819 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643842 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643864 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643884 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643904 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643926 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643948 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643967 4792 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.643995 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644018 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644040 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644061 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644079 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644098 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644118 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644138 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644160 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644185 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644208 4792 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644228 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644248 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644267 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644286 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644308 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644331 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644354 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644375 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644397 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644419 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644478 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644500 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644521 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644544 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644567 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644591 4792 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644609 4792 reconstruct.go:97] "Volume reconstruction finished" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.644623 4792 reconciler.go:26] "Reconciler: start to sync state" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.655990 4792 manager.go:324] Recovery completed Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.673213 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.674822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.674879 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.674899 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.675850 4792 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.675879 4792 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.675908 4792 state_mem.go:36] "Initialized new in-memory state store" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.682919 4792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.685329 4792 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.685411 4792 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.685454 4792 kubelet.go:2335] "Starting kubelet main sync loop" Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.685696 4792 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 27 17:09:38 crc kubenswrapper[4792]: W1127 17:09:38.692287 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.692404 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.713910 4792 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.715521 4792 policy_none.go:49] "None policy: Start" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.716910 4792 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.716997 4792 state_mem.go:35] "Initializing new in-memory state store" Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.786232 4792 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.812201 4792 manager.go:334] "Starting Device Plugin manager" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.812275 4792 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.812295 4792 server.go:79] "Starting device plugin registration server" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.812924 4792 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.812947 4792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.813607 4792 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.813820 4792 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.813837 4792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.816730 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="400ms" Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.821581 4792 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.913953 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.915966 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.916010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.916026 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.916056 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 27 17:09:38 crc kubenswrapper[4792]: E1127 17:09:38.916563 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.987401 4792 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.987563 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.989275 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.989329 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.989341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.989516 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.990006 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.990140 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.990449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.990514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.990538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.990725 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.990842 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.990885 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.991085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.991112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.991121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.992301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.992731 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.992755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.992337 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.992818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.992833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.992891 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.993054 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.993108 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994060 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994341 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994515 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.994568 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.995114 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.995152 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.995163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.995389 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.995433 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.995467 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.995494 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.995510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.996041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.996079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:38 crc kubenswrapper[4792]: I1127 17:09:38.996099 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050213 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050304 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050467 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050532 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050566 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050701 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050775 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050835 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050865 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050926 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.050955 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.051014 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.051053 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.051124 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 
17:09:39.117159 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.118270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.118303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.118313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.118334 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 27 17:09:39 crc kubenswrapper[4792]: E1127 17:09:39.118772 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152115 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152211 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152241 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152308 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152379 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152427 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152480 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152494 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152501 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152536 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152559 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152579 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152597 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152622 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152628 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152669 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152684 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152688 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152738 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152761 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152780 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.152824 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: E1127 17:09:39.218277 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="800ms" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.336457 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.354468 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.366541 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.385810 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.399767 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.401967 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-36c3da025c92870b371cb6fcc004a3fec6729fb5938029cf0595fc13fd12bdd2 WatchSource:0}: Error finding container 36c3da025c92870b371cb6fcc004a3fec6729fb5938029cf0595fc13fd12bdd2: Status 404 returned error can't find the container with id 36c3da025c92870b371cb6fcc004a3fec6729fb5938029cf0595fc13fd12bdd2 Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.403565 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0396aace4c2abee8e9ee87075e989eaf855f28e02ef61b189cf75efecb3c1547 WatchSource:0}: Error finding container 0396aace4c2abee8e9ee87075e989eaf855f28e02ef61b189cf75efecb3c1547: Status 404 returned error can't find the container with id 0396aace4c2abee8e9ee87075e989eaf855f28e02ef61b189cf75efecb3c1547 Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.408180 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 27 17:09:39 crc kubenswrapper[4792]: E1127 17:09:39.408265 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.408433 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3c45aa21bd58a265147990a9ea846d082ebb2352985cffee3f087bf1076ae4e4 WatchSource:0}: Error finding container 3c45aa21bd58a265147990a9ea846d082ebb2352985cffee3f087bf1076ae4e4: Status 404 returned error can't find the container with id 3c45aa21bd58a265147990a9ea846d082ebb2352985cffee3f087bf1076ae4e4 Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.415681 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-cc519246e0a785fb6df19020bb0bc295bcd1177c3e2ddf87ec0ac21d25a7e798 WatchSource:0}: Error finding container cc519246e0a785fb6df19020bb0bc295bcd1177c3e2ddf87ec0ac21d25a7e798: Status 404 returned error can't find the container with id cc519246e0a785fb6df19020bb0bc295bcd1177c3e2ddf87ec0ac21d25a7e798 Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.416816 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-745a27fe94f80d1a2b096f62027c5f98941767ac2d6773b9e4d541e470bc40b3 WatchSource:0}: Error finding container 745a27fe94f80d1a2b096f62027c5f98941767ac2d6773b9e4d541e470bc40b3: Status 404 returned error can't find the container with id 745a27fe94f80d1a2b096f62027c5f98941767ac2d6773b9e4d541e470bc40b3 Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.485680 4792 
Nov 27 17:09:39 crc kubenswrapper[4792]: E1127 17:09:39.485883 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.519945 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.521562 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.521625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.521679 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.521729 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 27 17:09:39 crc kubenswrapper[4792]: E1127 17:09:39.522527 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc"
Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.583971 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 27 17:09:39 crc kubenswrapper[4792]: E1127 17:09:39.584090 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.603373 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.612439 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:43:51.101737035 +0000 UTC
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.612521 4792 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 504h34m11.489220234s for next certificate rotation
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.690633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cc519246e0a785fb6df19020bb0bc295bcd1177c3e2ddf87ec0ac21d25a7e798"}
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.692234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3c45aa21bd58a265147990a9ea846d082ebb2352985cffee3f087bf1076ae4e4"}
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.694244 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0396aace4c2abee8e9ee87075e989eaf855f28e02ef61b189cf75efecb3c1547"}
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.695820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"36c3da025c92870b371cb6fcc004a3fec6729fb5938029cf0595fc13fd12bdd2"}
Nov 27 17:09:39 crc kubenswrapper[4792]: I1127 17:09:39.697019 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"745a27fe94f80d1a2b096f62027c5f98941767ac2d6773b9e4d541e470bc40b3"}
Nov 27 17:09:39 crc kubenswrapper[4792]: W1127 17:09:39.944044 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 27 17:09:39 crc kubenswrapper[4792]: E1127 17:09:39.944154 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Nov 27 17:09:40 crc kubenswrapper[4792]: E1127 17:09:40.018913 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="1.6s"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.323456 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.325273 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.325309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.325317 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.325341 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 27 17:09:40 crc kubenswrapper[4792]: E1127 17:09:40.325731 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.603134 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.702260 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43" exitCode=0 Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.702359 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43"} Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.702531 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.703806 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.703874 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.703901 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.704893 4792 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5ed0f9e54c980f6d41eb66d9aff930cedaf77b0dbebc024424a1f4104d935fb3" exitCode=0 Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.704973 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5ed0f9e54c980f6d41eb66d9aff930cedaf77b0dbebc024424a1f4104d935fb3"} Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.705008 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.706224 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.706266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.706278 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.707347 4792 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e" exitCode=0 Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.707400 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e"} Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.707422 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.708335 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.708368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.708378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.710278 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689"}
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.710313 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd"}
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.711812 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea" exitCode=0
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.711845 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea"}
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.711999 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.713291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.713336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.713354 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.719244 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.722334 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.722360 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:40 crc kubenswrapper[4792]: I1127 17:09:40.722371 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:41 crc kubenswrapper[4792]: W1127 17:09:41.098212 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Nov 27 17:09:41 crc kubenswrapper[4792]: E1127 17:09:41.098290 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 27 17:09:41 crc kubenswrapper[4792]: W1127 17:09:41.530618 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 27 17:09:41 crc kubenswrapper[4792]: E1127 17:09:41.530727 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 27 17:09:41 crc kubenswrapper[4792]: W1127 17:09:41.564445 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 27 17:09:41 crc kubenswrapper[4792]: E1127 17:09:41.564537 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.603417 4792 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Nov 27 17:09:41 crc kubenswrapper[4792]: E1127 17:09:41.620358 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="3.2s" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.718309 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.718350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.718516 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.720608 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.720637 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.720672 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.721592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.721675 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.721696 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.721708 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.723586 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da" exitCode=0 Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.723710 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.723736 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.724608 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.724636 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.724663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.725368 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bb6802de491430032bdb89c4fa8cb01659a4b3ddba39610cab5770246880a2fc"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.725432 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.727097 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.727121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.727130 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.728505 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.728531 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.728542 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0"} Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.728553 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.729162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.729183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.729192 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.926551 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.927449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.927473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.927481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:41 crc kubenswrapper[4792]: I1127 17:09:41.927499 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 27 17:09:41 crc kubenswrapper[4792]: E1127 17:09:41.927996 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.127140 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.733545 4792 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a" exitCode=0 Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.733683 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.733629 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a"} Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.735326 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.735368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.735383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.738743 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.739196 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.739252 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.739307 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.739620 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40"} Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.739673 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740260 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740548 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740598 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.740615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.741015 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 
Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.741051 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:42 crc kubenswrapper[4792]: I1127 17:09:42.741068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.743260 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9"}
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.743333 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.743358 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.743328 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7"}
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.743478 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758"}
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.743497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6"}
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.743454 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.744349 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.744381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.744392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.744349 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.744476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:43 crc kubenswrapper[4792]: I1127 17:09:43.744488 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.583909 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.752963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18"}
Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.753013 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
"Failed to trigger a manual run" probe="Readiness" Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.753069 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.753121 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.754628 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.754711 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.754736 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.754835 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.754885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:44 crc kubenswrapper[4792]: I1127 17:09:44.754907 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.128076 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.129268 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.129309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.129324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.129349 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.755019 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.755995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.756021 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:45 crc kubenswrapper[4792]: I1127 17:09:45.756029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.117705 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.117914 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.117959 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.119039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.119078 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.119090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.213902 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.214087 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.215079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.215108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.215120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.647232 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.647427 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.649098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.649141 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:47 crc kubenswrapper[4792]: I1127 17:09:47.649152 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.567609 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.568060 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.569771 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.569837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.569855 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.815363 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.815617 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.816955 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:48 crc kubenswrapper[4792]: 
Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.817002 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.817016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:48 crc kubenswrapper[4792]: E1127 17:09:48.821707 4792 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 27 17:09:48 crc kubenswrapper[4792]: I1127 17:09:48.824108 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.065634 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.065928 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.067788 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.067846 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.067866 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.766505 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.766707 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.768310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.768373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:49 crc kubenswrapper[4792]: I1127 17:09:49.768394 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.539201 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.539374 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.540328 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.540361 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.540369 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.647589 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.647721 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.768882 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.773969 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.774148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.774219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:50 crc kubenswrapper[4792]: I1127 17:09:50.781362 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:51 crc kubenswrapper[4792]: I1127 17:09:51.772495 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:51 crc kubenswrapper[4792]: I1127 17:09:51.773531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:51 crc kubenswrapper[4792]: I1127 17:09:51.773572 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:51 crc kubenswrapper[4792]: I1127 17:09:51.773605 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:52 crc kubenswrapper[4792]: W1127 17:09:52.427334 4792 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.427443 4792 trace.go:236] Trace[240259433]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Nov-2025 17:09:42.426) (total time: 10001ms): Nov 27 17:09:52 crc kubenswrapper[4792]: Trace[240259433]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:09:52.427) Nov 27 17:09:52 crc kubenswrapper[4792]: Trace[240259433]: [10.001319189s] [10.001319189s] END Nov 27 17:09:52 crc kubenswrapper[4792]: E1127 17:09:52.427463 4792 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.603890 4792 csi_plugin.go:884] Failed to contact API server when waiting for 
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.775713 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.777557 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40" exitCode=255
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.777610 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40"}
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.777832 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.778685 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.778714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.778726 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.779334 4792 scope.go:117] "RemoveContainer" containerID="503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40"
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.782972 4792 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.783021 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.796119 4792 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 27 17:09:52 crc kubenswrapper[4792]: I1127 17:09:52.796180 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 27 17:09:53 crc kubenswrapper[4792]: I1127 17:09:53.781875 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
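The 403 above means the apiserver is now answering rather than refusing connections: the startup probe hits /livez without credentials, and system:anonymous is forbidden, apparently because the RBAC that normally grants unauthenticated clients access to the health endpoints is not yet being served this early in bootstrap (the probe succeeds a few seconds later). A standalone Go sketch of the same unauthenticated request; this is a hypothetical reproduction, not the kubelet prober, and InsecureSkipVerify is used only because the test cluster's certificate is self-signed:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Skip certificate verification for the self-signed CRC endpoint;
	// never do this against a production apiserver.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://api-int.crc.testing:6443/livez")
	if err != nil {
		fmt.Println("dial error:", err) // e.g. "connection refused" earlier in this log
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode) // 403 while anonymous /livez is forbidden
}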
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 27 17:09:53 crc kubenswrapper[4792]: I1127 17:09:53.784603 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9"} Nov 27 17:09:53 crc kubenswrapper[4792]: I1127 17:09:53.784770 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:53 crc kubenswrapper[4792]: I1127 17:09:53.785600 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:53 crc kubenswrapper[4792]: I1127 17:09:53.785633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:53 crc kubenswrapper[4792]: I1127 17:09:53.785655 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:56 crc kubenswrapper[4792]: I1127 17:09:56.777568 4792 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.122116 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.122380 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.122492 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.123826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.123869 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.123885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.126165 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:57 crc kubenswrapper[4792]: E1127 17:09:57.780606 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.780708 4792 trace.go:236] Trace[38373769]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Nov-2025 17:09:44.450) (total time: 13330ms): Nov 27 17:09:57 crc kubenswrapper[4792]: Trace[38373769]: ---"Objects listed" error: 13330ms (17:09:57.780) Nov 27 17:09:57 crc kubenswrapper[4792]: Trace[38373769]: [13.330089702s] [13.330089702s] END Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.780845 4792 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.780942 4792 trace.go:236] Trace[153759987]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (27-Nov-2025 17:09:47.734) (total time: 10046ms): Nov 27 17:09:57 crc kubenswrapper[4792]: Trace[153759987]: ---"Objects listed" error: 10046ms (17:09:57.780) Nov 27 17:09:57 crc kubenswrapper[4792]: Trace[153759987]: [10.04673858s] [10.04673858s] END Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.780962 4792 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.781066 4792 trace.go:236] Trace[2066141276]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Nov-2025 17:09:45.146) (total time: 12634ms): Nov 27 17:09:57 crc kubenswrapper[4792]: Trace[2066141276]: ---"Objects listed" error: 12634ms (17:09:57.780) Nov 27 17:09:57 crc kubenswrapper[4792]: Trace[2066141276]: [12.634946473s] [12.634946473s] END Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.781090 4792 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 27 17:09:57 crc kubenswrapper[4792]: I1127 17:09:57.782404 4792 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 27 17:09:57 crc kubenswrapper[4792]: E1127 17:09:57.783822 4792 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.587487 4792 apiserver.go:52] "Watching apiserver" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.590698 4792 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.591078 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-v6h2x","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.591483 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.591520 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.591488 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.591613 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.591751 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.591805 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.592091 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-v6h2x" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.592217 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.592657 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.592722 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.594398 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.594545 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.594783 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.594402 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.594950 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.595262 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.595374 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.595442 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.595442 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.595698 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.595948 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.596824 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.612997 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.613589 4792 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.628227 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.641524 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.647864 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.652772 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.654167 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.658866 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.662369 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.682406 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 
27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688414 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688462 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688488 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688510 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688535 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688558 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688578 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688599 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688624 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688674 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 27 17:09:58 crc 
kubenswrapper[4792]: I1127 17:09:58.688697 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688718 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688739 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688782 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688808 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688834 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688870 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688898 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688917 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688948 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688977 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689003 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689031 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689059 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689094 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689124 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689154 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689187 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689217 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689244 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689265 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689290 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689311 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689334 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689520 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689557 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689589 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689613 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689664 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689692 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689725 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689767 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689787 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689809 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689835 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689862 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689888 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.688807 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689916 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689939 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689966 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689988 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690012 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690039 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690067 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690093 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690119 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690150 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690174 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690238 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690268 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690303 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690331 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690363 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690387 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690410 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690433 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690471 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690498 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690529 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690656 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690728 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692497 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692624 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692681 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692727 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692755 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692788 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692817 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689037 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693551 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689102 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689295 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689285 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689341 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689335 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689505 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689536 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689589 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689774 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689818 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689889 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.689963 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690115 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690142 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690144 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690160 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690144 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690263 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690254 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690392 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690509 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690527 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690559 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690607 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.690745 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692728 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692768 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692982 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693017 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693054 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693105 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693155 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693215 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693241 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693314 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693365 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693595 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693672 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.693726 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.694160 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.694248 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.694331 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.694537 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.694794 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.694816 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.694850 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.694875 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.695229 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.695346 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.695568 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.695573 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.695869 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.695941 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.696001 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.696109 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.696136 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:09:59.196060261 +0000 UTC m=+21.538886579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.692842 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.697036 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.697415 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.696969 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.697725 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.697815 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.697825 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
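[Editor's note] The E1127 record a few entries above is the only failure in this teardown burst: the kubelet (restarted at 17:09:36) cannot unmount the CSI volume because the kubevirt.io.hostpath-provisioner driver has not yet re-registered with the new kubelet process, so the operation is queued for retry rather than lost. CSI drivers announce themselves by creating a registration socket that the kubelet watches; the Go sketch below (a hypothetical diagnostic helper, not part of this log, assuming the default /var/lib/kubelet/plugins_registry path) simply lists those sockets so you can see which drivers are currently registered on the node.

// list_csi_plugins.go - minimal sketch: list the plugin-registration
// sockets the kubelet watches. A driver such as
// kubevirt.io.hostpath-provisioner only becomes usable again once its
// socket reappears here after a kubelet restart. The directory is the
// kubelet default and may differ on customized nodes.
package main

import (
	"fmt"
	"os"
)

func main() {
	const registry = "/var/lib/kubelet/plugins_registry" // kubelet default
	entries, err := os.ReadDir(registry)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot read %s: %v\n", registry, err)
		os.Exit(1)
	}
	for _, e := range entries {
		// Registration sockets are conventionally named <driver>-reg.sock.
		fmt.Println(e.Name())
	}
}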
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.697856 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698004 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698034 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698253 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698290 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698359 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698411 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698448 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698498 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698523 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698723 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.699065 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698768 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.698748 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.699913 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.699957 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.699992 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700018 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700045 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700098 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700129 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700158 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700192 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700218 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700248 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700279 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700305 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700333 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700361 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700391 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700420 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700452 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700481 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700509 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700534 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700566 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700589 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700619 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700667 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700699 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700727 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700757 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700790 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700814 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700844 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700873 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700896 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700923 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700949 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.700982 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.701004 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.701017 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.701032 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.701062 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.701094 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.701360 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.701833 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.702004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.702048 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.702104 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.702598 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.703143 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.703178 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.703198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.703252 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.703309 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.703325 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.703361 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.703565 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.704896 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.704901 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.705442 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.705628 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.705635 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.705867 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.706311 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.706518 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.706735 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707133 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707193 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707234 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707264 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707299 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707323 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707350 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707379 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707405 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707433 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707458 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707482 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707513 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707539 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707568 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707591 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707615 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.707639 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.711695 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.711739 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.711771 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.711797 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.711825 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.711849 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.711941 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712040 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712079 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712117 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712155 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 27 17:09:58 crc 
kubenswrapper[4792]: I1127 17:09:58.712187 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712214 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712248 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712283 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712315 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712352 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712386 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712416 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712443 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712472 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712500 4792 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712531 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712561 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712588 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712614 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712671 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712703 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712729 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712760 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712788 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" 
(UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712815 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712841 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712869 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712897 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712922 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712951 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.712982 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713011 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713040 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713097 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713126 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713155 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713309 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713347 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xgdl\" (UniqueName: \"kubernetes.io/projected/d9bfa510-5a70-4b77-a579-9907b15f8176-kube-api-access-5xgdl\") pod \"node-resolver-v6h2x\" (UID: \"d9bfa510-5a70-4b77-a579-9907b15f8176\") " pod="openshift-dns/node-resolver-v6h2x" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713413 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713440 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713471 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9bfa510-5a70-4b77-a579-9907b15f8176-hosts-file\") pod \"node-resolver-v6h2x\" (UID: \"d9bfa510-5a70-4b77-a579-9907b15f8176\") " pod="openshift-dns/node-resolver-v6h2x" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713527 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713550 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713586 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713616 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713672 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713759 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713794 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713911 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713938 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713957 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713972 4792 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713987 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714004 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714019 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714034 4792 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714047 4792 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714066 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.713781 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714080 4792 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714759 4792 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714794 4792 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714824 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714850 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714878 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714902 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714916 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714937 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714950 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714963 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714976 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.714990 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715003 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715016 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715030 4792 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715042 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715055 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715066 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715082 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715094 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715106 4792 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715118 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715131 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715143 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715161 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715173 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715187 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715197 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715208 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715221 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715232 4792 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715243 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715254 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715269 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715279 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715291 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715302 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715315 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715325 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715336 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715351 4792 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715363 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715376 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715389 4792 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715403 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715414 4792 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715424 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715436 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715449 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715460 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715471 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715482 4792 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715497 4792 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715507 4792 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715517 4792 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715531 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715541 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715552 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715564 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715578 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715588 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715599 4792 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715612 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715626 4792 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715637 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715653 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715681 4792 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715697 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.715709 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.716036 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.716616 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.716856 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.717047 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.717156 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.717216 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.717357 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.717534 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.717753 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.718001 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.718186 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.718227 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.718388 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.718460 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.718751 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.718948 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.719399 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.719858 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.720058 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.721050 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.721715 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.723805 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.724777 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.724928 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.726163 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.727439 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.733054 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.733194 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.733225 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.733512 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.734038 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.734153 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.734375 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.734762 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.734982 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.736135 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.736395 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.736649 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.736736 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.736795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.736978 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.737061 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.737093 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.737241 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.737204 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.737387 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.737493 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.737571 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.738089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.738162 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.738214 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.738536 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.738541 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.738870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.739075 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.739223 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.739501 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.739788 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.740300 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.740412 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.740522 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.740711 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.740809 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.740867 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.741114 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.741266 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.741381 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.741403 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.741711 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.742039 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.742104 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.742200 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.742870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.743920 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.744386 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.744515 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.744804 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745061 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745275 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.745351 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745372 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745449 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745450 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745355 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.745563 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745571 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745695 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745711 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.745712 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.745786 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:09:59.245768831 +0000 UTC m=+21.588595139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.746446 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.746794 4792 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.748089 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.749115 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.751005 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.752499 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-27 17:09:59.245798672 +0000 UTC m=+21.588624990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.753721 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.754036 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.754024 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.763276 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.763810 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.764253 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.764575 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.764600 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.764614 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for 
pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.764687 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-27 17:09:59.264648986 +0000 UTC m=+21.607475384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.765091 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.769751 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.770256 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.770948 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.771432 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.771454 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.771466 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.771518 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-27 17:09:59.271497287 +0000 UTC m=+21.614323605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.772891 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.773846 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.774112 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.774154 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.774165 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.775915 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.776294 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.776445 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.779123 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.779850 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.781270 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.783181 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.784064 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.785759 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.786884 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.788350 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.788390 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.789277 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.790681 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.792461 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 
17:09:58.793846 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.794389 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.794612 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.795263 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.801789 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.805202 4792 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:09:58 crc kubenswrapper[4792]: E1127 17:09:58.805542 4792 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.806384 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.814676 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817231 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xgdl\" (UniqueName: \"kubernetes.io/projected/d9bfa510-5a70-4b77-a579-9907b15f8176-kube-api-access-5xgdl\") pod \"node-resolver-v6h2x\" (UID: \"d9bfa510-5a70-4b77-a579-9907b15f8176\") " pod="openshift-dns/node-resolver-v6h2x" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817269 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9bfa510-5a70-4b77-a579-9907b15f8176-hosts-file\") pod \"node-resolver-v6h2x\" (UID: \"d9bfa510-5a70-4b77-a579-9907b15f8176\") " pod="openshift-dns/node-resolver-v6h2x" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817298 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817318 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817392 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817407 4792 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817417 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d9bfa510-5a70-4b77-a579-9907b15f8176-hosts-file\") pod \"node-resolver-v6h2x\" (UID: \"d9bfa510-5a70-4b77-a579-9907b15f8176\") " pod="openshift-dns/node-resolver-v6h2x" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817568 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817613 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817735 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817754 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817766 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817878 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817889 4792 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817898 4792 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817906 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817917 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817927 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817957 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817966 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817975 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.817985 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818000 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818035 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818044 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818053 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818062 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818070 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818079 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818090 4792 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818113 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818122 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818131 4792 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818140 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818150 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818159 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818168 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818192 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818201 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818210 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818221 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818231 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818240 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818280 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818289 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818298 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818314 4792 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818323 4792 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818347 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818356 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818366 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818376 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818386 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818395 4792 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818405 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818429 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818441 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818450 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818458 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818467 4792 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818476 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818503 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818512 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818521 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818530 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818538 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818546 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818556 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818580 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818590 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818599 4792 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818609 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818618 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818627 4792 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818636 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818666 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818678 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818686 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818695 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818705 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818714 4792 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818739 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818750 4792 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc 
kubenswrapper[4792]: I1127 17:09:58.818759 4792 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818769 4792 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818778 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818787 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818794 4792 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818820 4792 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818829 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818837 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818846 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818855 4792 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818863 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818875 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818900 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818909 4792 reconciler_common.go:293] "Volume detached for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818918 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818927 4792 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818936 4792 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818945 4792 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818954 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818963 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818972 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.818999 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819008 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819016 4792 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819025 4792 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819034 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 
17:09:58.819043 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819054 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819063 4792 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819071 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819080 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819088 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819097 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819105 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819113 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819138 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.819151 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.827553 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.837343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xgdl\" (UniqueName: \"kubernetes.io/projected/d9bfa510-5a70-4b77-a579-9907b15f8176-kube-api-access-5xgdl\") pod \"node-resolver-v6h2x\" (UID: \"d9bfa510-5a70-4b77-a579-9907b15f8176\") " pod="openshift-dns/node-resolver-v6h2x" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.842335 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.854315 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.864383 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.882408 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.896499 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 
27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.906955 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.910807 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.919936 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-ap
iserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.920332 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.926826 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 27 17:09:58 crc kubenswrapper[4792]: W1127 17:09:58.928551 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-f950bb731146101df2870ca2ec34320faf5a5f17a94ecddcd2ac923072f8993e WatchSource:0}: Error finding container f950bb731146101df2870ca2ec34320faf5a5f17a94ecddcd2ac923072f8993e: Status 404 returned error can't find the container with id f950bb731146101df2870ca2ec34320faf5a5f17a94ecddcd2ac923072f8993e Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.933183 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.933369 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-v6h2x" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.950788 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: W1127 17:09:58.951145 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9bfa510_5a70_4b77_a579_9907b15f8176.slice/crio-4729db6556b42183144669084d0f358277ca621c4db1b936406fc1f4733dba9b WatchSource:0}: Error finding container 4729db6556b42183144669084d0f358277ca621c4db1b936406fc1f4733dba9b: Status 404 returned error can't find the container with id 4729db6556b42183144669084d0f358277ca621c4db1b936406fc1f4733dba9b Nov 27 17:09:58 crc kubenswrapper[4792]: W1127 17:09:58.951901 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3894c4b7f0c19084bc2c66182d8fe40ea2ec6a853621ccd6f751435e0858dca2 WatchSource:0}: Error finding container 3894c4b7f0c19084bc2c66182d8fe40ea2ec6a853621ccd6f751435e0858dca2: Status 404 returned error can't find the container with id 
3894c4b7f0c19084bc2c66182d8fe40ea2ec6a853621ccd6f751435e0858dca2 Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.961585 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:58 crc kubenswrapper[4792]: I1127 17:09:58.980292 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.001569 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.014201 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.025718 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.045548 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.065382 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.139236 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-56bcx"] Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.139658 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.142044 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.142332 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.145890 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.146150 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.146821 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.201159 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gbrqr"] Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.202250 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.208135 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.213540 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.213704 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.213587 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.213669 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.217079 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.222410 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.222487 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-rootfs\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.222514 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-proxy-tls\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.222560 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-mcd-auth-proxy-config\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " 
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.222583 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpkzx\" (UniqueName: \"kubernetes.io/projected/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-kube-api-access-zpkzx\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.222745 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:10:00.222725091 +0000 UTC m=+22.565551409 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.230826 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2dr66"] Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.231551 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.232579 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.235037 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5qmhg"] Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.235422 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.235512 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.235706 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.236017 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.268589 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.297952 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.310477 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vkjf7"] Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.311262 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.314016 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.314075 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.314105 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.314212 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.315402 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.315593 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.315778 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.323893 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-conf-dir\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.323942 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/71907161-f8b0-4b44-b61a-0e04200083f0-multus-daemon-config\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.323968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.323993 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-k8s-cni-cncf-io\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324044 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.324123 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.324168 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.324184 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.324238 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:00.32421919 +0000 UTC m=+22.667045608 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324281 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/71907161-f8b0-4b44-b61a-0e04200083f0-cni-binary-copy\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324301 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-socket-dir-parent\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324322 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-rootfs\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324338 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-cni-multus\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324360 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-proxy-tls\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.324431 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.324494 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:00.324478047 +0000 UTC m=+22.667304435 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/99059038-38b6-4797-a8ba-be8bfaecfa8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324627 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-multus-certs\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324667 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-cni-bin\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324693 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7nm5\" (UniqueName: \"kubernetes.io/projected/99059038-38b6-4797-a8ba-be8bfaecfa8a-kube-api-access-m7nm5\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324711 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-system-cni-dir\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324732 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-netns\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324771 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4n44\" (UniqueName: \"kubernetes.io/projected/71907161-f8b0-4b44-b61a-0e04200083f0-kube-api-access-h4n44\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324792 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-cni-dir\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " 
pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324862 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324884 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-mcd-auth-proxy-config\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpkzx\" (UniqueName: \"kubernetes.io/projected/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-kube-api-access-zpkzx\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324965 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99059038-38b6-4797-a8ba-be8bfaecfa8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.324990 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-kubelet\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325015 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-hostroot\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-os-release\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325058 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-system-cni-dir\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325458 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmmz\" (UniqueName: 
\"kubernetes.io/projected/2ec75c0b-1943-49d4-8813-bf8cc5218511-kube-api-access-fzmmz\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325514 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-os-release\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325564 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-rootfs\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325653 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325687 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325711 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-cnibin\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.325788 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325799 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-mcd-auth-proxy-config\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.325821 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:00.325809978 +0000 UTC m=+22.668636296 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325870 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-cnibin\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.325929 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.325947 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.325956 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.325997 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-etc-kubernetes\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.326090 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:00.326076844 +0000 UTC m=+22.668903232 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.328935 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.329598 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-proxy-tls\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.341206 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.343786 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpkzx\" (UniqueName: \"kubernetes.io/projected/8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36-kube-api-access-zpkzx\") pod \"machine-config-daemon-56bcx\" (UID: \"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\") " pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.352722 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.363130 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.375937 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.391166 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.402527 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.413823 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.424212 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.426584 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-multus-certs\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.426927 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-ovn\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427063 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-bin\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/99059038-38b6-4797-a8ba-be8bfaecfa8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427320 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-env-overrides\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427419 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-cni-bin\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427517 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-config\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427629 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-kubelet\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-var-lib-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427873 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7nm5\" (UniqueName: \"kubernetes.io/projected/99059038-38b6-4797-a8ba-be8bfaecfa8a-kube-api-access-m7nm5\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428003 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-system-cni-dir\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428104 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-systemd\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428209 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-netns\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428291 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-netns\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428314 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/99059038-38b6-4797-a8ba-be8bfaecfa8a-cni-binary-copy\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.426674 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-multus-certs\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.427549 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-cni-bin\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428121 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-system-cni-dir\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428554 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4n44\" (UniqueName: \"kubernetes.io/projected/71907161-f8b0-4b44-b61a-0e04200083f0-kube-api-access-h4n44\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428674 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-cni-dir\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428781 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428869 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-kubelet\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.428932 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.428955 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-kubelet\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429009 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-cni-dir\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-hostroot\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429193 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-hostroot\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.429290 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs podName:2ec75c0b-1943-49d4-8813-bf8cc5218511 nodeName:}" failed. No retries permitted until 2025-11-27 17:09:59.929268184 +0000 UTC m=+22.272094502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs") pod "network-metrics-daemon-5qmhg" (UID: "2ec75c0b-1943-49d4-8813-bf8cc5218511") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429346 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429377 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99059038-38b6-4797-a8ba-be8bfaecfa8a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429436 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-os-release\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429461 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-script-lib\") pod \"ovnkube-node-vkjf7\" (UID: 
\"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429511 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-system-cni-dir\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429536 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzmmz\" (UniqueName: \"kubernetes.io/projected/2ec75c0b-1943-49d4-8813-bf8cc5218511-kube-api-access-fzmmz\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429558 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-os-release\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429597 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429677 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-netd\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-etc-kubernetes\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429730 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-node-log\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429780 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovn-node-metrics-cert\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429806 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-cnibin\") pod 
\"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429847 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-cnibin\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429868 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-conf-dir\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429891 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/71907161-f8b0-4b44-b61a-0e04200083f0-multus-daemon-config\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-k8s-cni-cncf-io\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.429960 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-etc-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-log-socket\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430024 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-ovn-kubernetes\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430045 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c892m\" (UniqueName: \"kubernetes.io/projected/cd5ee573-9a50-4d09-b129-fb461db20cf6-kube-api-access-c892m\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/99059038-38b6-4797-a8ba-be8bfaecfa8a-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/71907161-f8b0-4b44-b61a-0e04200083f0-cni-binary-copy\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430129 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-socket-dir-parent\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430153 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430211 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-cni-multus\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430236 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-systemd-units\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430254 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-slash\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430271 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-os-release\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430280 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-netns\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430360 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-conf-dir\") pod \"multus-gbrqr\" 
(UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430374 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-cnibin\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430578 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-system-cni-dir\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430839 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-os-release\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430876 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-var-lib-cni-multus\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430903 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-multus-socket-dir-parent\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430910 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-etc-kubernetes\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430937 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-cnibin\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.430943 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/71907161-f8b0-4b44-b61a-0e04200083f0-host-run-k8s-cni-cncf-io\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.431199 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/71907161-f8b0-4b44-b61a-0e04200083f0-multus-daemon-config\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.431446 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/71907161-f8b0-4b44-b61a-0e04200083f0-cni-binary-copy\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.431695 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/99059038-38b6-4797-a8ba-be8bfaecfa8a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.433451 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.446599 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7nm5\" (UniqueName: \"kubernetes.io/projected/99059038-38b6-4797-a8ba-be8bfaecfa8a-kube-api-access-m7nm5\") pod \"multus-additional-cni-plugins-2dr66\" (UID: \"99059038-38b6-4797-a8ba-be8bfaecfa8a\") " pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.447005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4n44\" (UniqueName: \"kubernetes.io/projected/71907161-f8b0-4b44-b61a-0e04200083f0-kube-api-access-h4n44\") pod \"multus-gbrqr\" (UID: \"71907161-f8b0-4b44-b61a-0e04200083f0\") " pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.448849 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\
\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b1
7b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.449985 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzmmz\" (UniqueName: \"kubernetes.io/projected/2ec75c0b-1943-49d4-8813-bf8cc5218511-kube-api-access-fzmmz\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.459919 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.468268 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.475004 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 
27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.477677 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.484411 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.495379 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.506555 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.512459 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gbrqr" Nov 27 17:09:59 crc kubenswrapper[4792]: W1127 17:09:59.526399 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71907161_f8b0_4b44_b61a_0e04200083f0.slice/crio-3a6ac72442fda3fb9d050e247832ef577ae9d6c7ca4d57d64364e459fc442dfb WatchSource:0}: Error finding container 3a6ac72442fda3fb9d050e247832ef577ae9d6c7ca4d57d64364e459fc442dfb: Status 404 returned error can't find the container with id 3a6ac72442fda3fb9d050e247832ef577ae9d6c7ca4d57d64364e459fc442dfb Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.530796 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.530854 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-script-lib\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.530902 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-netd\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.530920 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-node-log\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.530940 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovn-node-metrics-cert\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.530945 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.530978 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-etc-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.530999 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-log-socket\") pod 
\"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531033 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-ovn-kubernetes\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531050 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c892m\" (UniqueName: \"kubernetes.io/projected/cd5ee573-9a50-4d09-b129-fb461db20cf6-kube-api-access-c892m\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531038 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-netd\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531096 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531128 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-etc-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531159 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531188 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-systemd-units\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531203 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-node-log\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531100 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-ovn-kubernetes\") pod 
\"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531137 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-systemd-units\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-slash\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531306 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-netns\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531093 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-log-socket\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-ovn\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531362 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-bin\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531390 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-config\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531389 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-netns\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531403 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-slash\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: 
I1127 17:09:59.531417 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-env-overrides\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531448 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-bin\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531454 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-kubelet\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531488 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-var-lib-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531514 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-systemd\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531577 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-systemd\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531620 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-kubelet\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531652 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-script-lib\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531674 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-var-lib-openvswitch\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.531489 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-ovn\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.532043 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-config\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.532306 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-env-overrides\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.534149 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovn-node-metrics-cert\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.536245 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.567369 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2dr66" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.570551 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c892m\" (UniqueName: \"kubernetes.io/projected/cd5ee573-9a50-4d09-b129-fb461db20cf6-kube-api-access-c892m\") pod \"ovnkube-node-vkjf7\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.602009 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.625090 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.638843 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\"
,\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\
\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 27 17:09:59 crc kubenswrapper[4792]: W1127 17:09:59.660103 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd5ee573_9a50_4d09_b129_fb461db20cf6.slice/crio-a32bcd84a5d310b7576d8b3fced23f0da4d251ca928ad554a63957549ecae5f8 WatchSource:0}: Error finding container a32bcd84a5d310b7576d8b3fced23f0da4d251ca928ad554a63957549ecae5f8: Status 404 returned error can't find the container with id a32bcd84a5d310b7576d8b3fced23f0da4d251ca928ad554a63957549ecae5f8 Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.686360 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.686732 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.689448 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.689610 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.802447 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.802508 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.802519 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3894c4b7f0c19084bc2c66182d8fe40ea2ec6a853621ccd6f751435e0858dca2"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.804129 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934" exitCode=0 Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.804177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.804192 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"a32bcd84a5d310b7576d8b3fced23f0da4d251ca928ad554a63957549ecae5f8"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.805997 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerStarted","Data":"b6759293d0e64603dc6e2e20521046c343a069abd1e66990918de094ec68953e"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.807283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gbrqr" event={"ID":"71907161-f8b0-4b44-b61a-0e04200083f0","Type":"ContainerStarted","Data":"fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.807311 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gbrqr" event={"ID":"71907161-f8b0-4b44-b61a-0e04200083f0","Type":"ContainerStarted","Data":"3a6ac72442fda3fb9d050e247832ef577ae9d6c7ca4d57d64364e459fc442dfb"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.809207 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.809249 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" 
event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.809261 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"b5477a0b1f77e655c79d61629eaaa4e85243f2efe2097e7a261363d8f427cf7c"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.811187 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-v6h2x" event={"ID":"d9bfa510-5a70-4b77-a579-9907b15f8176","Type":"ContainerStarted","Data":"15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.811234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-v6h2x" event={"ID":"d9bfa510-5a70-4b77-a579-9907b15f8176","Type":"ContainerStarted","Data":"4729db6556b42183144669084d0f358277ca621c4db1b936406fc1f4733dba9b"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.812185 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"01e029fbd5eec91371ae950d6dd98780f53dea247399290caa8de6b616fd93c1"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.813574 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.813603 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f950bb731146101df2870ca2ec34320faf5a5f17a94ecddcd2ac923072f8993e"} Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.819074 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.831084 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.854138 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.864471 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.877292 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.891507 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.918033 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.937047 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.937841 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: E1127 17:09:59.937895 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs 
podName:2ec75c0b-1943-49d4-8813-bf8cc5218511 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:00.937878239 +0000 UTC m=+23.280704557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs") pod "network-metrics-daemon-5qmhg" (UID: "2ec75c0b-1943-49d4-8813-bf8cc5218511") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.964048 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:09:59 crc kubenswrapper[4792]: I1127 17:09:59.999591 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:09:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.038565 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.079763 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.120117 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.167504 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.197460 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.240304 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.240509 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:10:02.240494015 +0000 UTC m=+24.583320333 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.248078 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.279635 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.321496 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.341074 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.341408 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.341434 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.341456 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341243 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341549 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341564 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341579 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341594 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:02.341581275 +0000 UTC m=+24.684407593 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341636 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:02.341618146 +0000 UTC m=+24.684444524 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341521 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341677 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341696 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341708 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341712 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:02.341699548 +0000 UTC m=+24.684525866 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.341730 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:02.341722998 +0000 UTC m=+24.684549316 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.376624 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.404111 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.442637 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.478356 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.524315 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.561282 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.563692 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.574275 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.617733 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.634170 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.657101 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.685912 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.685941 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.686088 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.686188 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.691544 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.692445 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.693443 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.694335 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.695197 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.695940 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.696515 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.696796 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.697570 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.698525 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.699341 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.700077 4792 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.701937 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.702979 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.703772 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.704513 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.705262 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.706090 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.706703 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.707501 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.708344 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.709032 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.709816 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.710495 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.711621 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.715509 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.716518 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.718219 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.718953 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.720521 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.721453 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.722973 4792 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.723177 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.725159 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.725743 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.726591 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.728127 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.728992 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.729946 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.730706 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.731921 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.732744 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.737836 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.775917 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.817620 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.818249 4792 generic.go:334] "Generic (PLEG): container finished" podID="99059038-38b6-4797-a8ba-be8bfaecfa8a" containerID="0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85" exitCode=0 Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.818550 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerDied","Data":"0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85"} Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.826906 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931"} Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.826951 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df"} Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.826961 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee"} Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.826969 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb"} Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.826978 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb"} Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.826986 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702"} Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.857803 4792 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.880783 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.919382 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.948815 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.950143 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: E1127 17:10:00.950215 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs podName:2ec75c0b-1943-49d4-8813-bf8cc5218511 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:02.950201795 +0000 UTC m=+25.293028113 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs") pod "network-metrics-daemon-5qmhg" (UID: "2ec75c0b-1943-49d4-8813-bf8cc5218511") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:00 crc kubenswrapper[4792]: I1127 17:10:00.961237 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/n
etns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.001835 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:00Z 
is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.040457 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.078877 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.118306 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.165629 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.200759 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.237737 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.277782 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.315176 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.381005 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.425190 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.460262 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc
2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.485382 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.517311 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.557285 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.604288 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.638599 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.680073 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.686214 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.686279 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:01 crc kubenswrapper[4792]: E1127 17:10:01.686366 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:01 crc kubenswrapper[4792]: E1127 17:10:01.686741 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.719137 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.758427 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.798364 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.831411 4792 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerStarted","Data":"5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375"} Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.832871 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013"} Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.844069 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.878560 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.918436 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:01 crc kubenswrapper[4792]: I1127 17:10:01.961778 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.003767 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc 
kubenswrapper[4792]: I1127 17:10:02.041250 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.086505 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.127119 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796
bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.160307 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.202829 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.240569 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.261162 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.261489 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:10:06.26145186 +0000 UTC m=+28.604278228 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.286073 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.324364 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.362333 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.362461 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.362535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.362729 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 
17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.362706 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.362812 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.362855 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.362865 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.362884 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.362907 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.362932 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.362859 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:06.362826557 +0000 UTC m=+28.705652925 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.362992 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.363005 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:06.36297987 +0000 UTC m=+28.705806238 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.363101 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:06.363076362 +0000 UTC m=+28.705902730 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.363119 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.363183 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:06.363163625 +0000 UTC m=+28.705989963 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.409174 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.440236 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.488084 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.527406 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z 
is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.559588 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.599242 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.686229 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.686262 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.686420 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.686537 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.829882 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-q5zkv"] Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.830322 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.832066 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.833324 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.833425 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.833755 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.842692 4792 generic.go:334] "Generic (PLEG): container finished" podID="99059038-38b6-4797-a8ba-be8bfaecfa8a" containerID="5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375" exitCode=0 Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.842787 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerDied","Data":"5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375"} Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.847998 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2"} Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.857804 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.872211 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.886110 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.896861 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.907367 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.919878 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.956260 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.970029 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-serviceca\") pod \"node-ca-q5zkv\" (UID: \"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.970108 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.970203 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-host\") pod \"node-ca-q5zkv\" (UID: 
\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.970247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwkqt\" (UniqueName: \"kubernetes.io/projected/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-kube-api-access-kwkqt\") pod \"node-ca-q5zkv\" (UID: \"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.970243 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: E1127 17:10:02.970371 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs podName:2ec75c0b-1943-49d4-8813-bf8cc5218511 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:06.97034622 +0000 UTC m=+29.313172668 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs") pod "network-metrics-daemon-5qmhg" (UID: "2ec75c0b-1943-49d4-8813-bf8cc5218511") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:02 crc kubenswrapper[4792]: I1127 17:10:02.998499 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.039112 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.071290 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-host\") pod \"node-ca-q5zkv\" (UID: \"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.071332 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwkqt\" (UniqueName: \"kubernetes.io/projected/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-kube-api-access-kwkqt\") pod \"node-ca-q5zkv\" (UID: \"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.071382 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-serviceca\") pod \"node-ca-q5zkv\" (UID: \"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.071389 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-host\") pod \"node-ca-q5zkv\" (UID: \"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.072390 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-serviceca\") pod \"node-ca-q5zkv\" (UID: \"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.077942 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.114040 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwkqt\" (UniqueName: \"kubernetes.io/projected/ecdda3f2-57fa-4cdf-9b2e-e148452fb25c-kube-api-access-kwkqt\") pod \"node-ca-q5zkv\" (UID: \"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\") " pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.144411 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z 
is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.145007 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-q5zkv" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.177128 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.220549 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.261091 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0
a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.306022 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.335691 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.383100 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.424568 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.456186 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.507433 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: W1127 17:10:03.524309 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecdda3f2_57fa_4cdf_9b2e_e148452fb25c.slice/crio-1c6de97e71d9272a5f3c88dec4d61679e9fe912662978d1c16c7a3dab6d6c7af WatchSource:0}: Error finding container 1c6de97e71d9272a5f3c88dec4d61679e9fe912662978d1c16c7a3dab6d6c7af: Status 404 returned error can't find the container with id 1c6de97e71d9272a5f3c88dec4d61679e9fe912662978d1c16c7a3dab6d6c7af Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.536999 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.592259 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
7T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a
4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.619163 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.659111 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.685857 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.685889 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:03 crc kubenswrapper[4792]: E1127 17:10:03.685953 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:03 crc kubenswrapper[4792]: E1127 17:10:03.686066 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.696885 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.739275 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.778367 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.816165 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.852794 4792 generic.go:334] "Generic (PLEG): container finished" podID="99059038-38b6-4797-a8ba-be8bfaecfa8a" containerID="db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64" exitCode=0 Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.852849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerDied","Data":"db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64"} Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.854871 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q5zkv" event={"ID":"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c","Type":"ContainerStarted","Data":"9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274"} Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.854914 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-q5zkv" 
event={"ID":"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c","Type":"ContainerStarted","Data":"1c6de97e71d9272a5f3c88dec4d61679e9fe912662978d1c16c7a3dab6d6c7af"} Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.858557 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.899338 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.938555 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:03 crc kubenswrapper[4792]: I1127 17:10:03.981964 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:03Z 
is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.018489 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.061357 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.098439 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.137899 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 
2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.184856 4792 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.184960 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\
"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.186558 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.186622 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.186664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.186805 4792 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.239982 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.251254 4792 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.251501 4792 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.252881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.252917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.252928 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.252946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.252958 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: E1127 17:10:04.264943 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.268960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.269004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.269021 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.269043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.269058 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: E1127 17:10:04.282533 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.287430 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.287483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.287499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.287521 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.287537 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.297042 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: E1127 17:10:04.300407 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.304516 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.304570 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.304595 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.304618 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.304633 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: E1127 17:10:04.318547 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.322413 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.322451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.322461 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.322478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.322492 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: E1127 17:10:04.335390 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: E1127 17:10:04.335575 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.336628 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.337063 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.337099 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.337112 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.337131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.337145 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.379043 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.417069 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.439341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.439381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.439389 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.439403 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.439412 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.455768 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc 
kubenswrapper[4792]: I1127 17:10:04.496944 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.540032 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.541324 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.541365 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.541378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.541398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.541414 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.580118 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.625723 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z 
is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.643937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.643979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.644025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.644052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.644066 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.661046 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.686292 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:04 crc kubenswrapper[4792]: E1127 17:10:04.686526 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.686881 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:04 crc kubenswrapper[4792]: E1127 17:10:04.687467 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.746002 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.746353 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.746365 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.746381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.746392 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.848065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.848103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.848111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.848125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.848136 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.861169 4792 generic.go:334] "Generic (PLEG): container finished" podID="99059038-38b6-4797-a8ba-be8bfaecfa8a" containerID="4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6" exitCode=0 Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.861222 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerDied","Data":"4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.873885 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.923948 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.942755 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.950239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.950271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.950279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.950293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.950302 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:04Z","lastTransitionTime":"2025-11-27T17:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.954370 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.965312 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.978038 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:04 crc kubenswrapper[4792]: I1127 17:10:04.988916 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:04Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.006860 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z 
is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.018432 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.052213 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.052262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.052273 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.052290 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.052304 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.060098 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.097843 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.141735 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.155023 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.155089 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.155111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.155140 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.155161 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.185510 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:
10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.216932 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.257669 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.257706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.257716 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.257732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.257744 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.261404 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.300339 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.359917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.359956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.359965 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.359979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.359988 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.462151 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.462206 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.462222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.462241 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.462253 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.565122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.565176 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.565196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.565226 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.565250 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.668438 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.668521 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.668552 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.668581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.668602 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.686153 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.686197 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 27 17:10:05 crc kubenswrapper[4792]: E1127 17:10:05.686363 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 27 17:10:05 crc kubenswrapper[4792]: E1127 17:10:05.686497 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.771785 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.771822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.771832 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.771846 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.771856 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.869066 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb"}
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.869367 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.869407 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.874108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.874149 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.874165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.874185 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.874200 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.874118 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerStarted","Data":"e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa"}
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.886967 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.905738 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.906007 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7"
Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.908427 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a
03b00916ecc12843855d52bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.922794 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.933812 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.944508 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.960636 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.975171 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.978121 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.978158 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.978173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.978191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.978204 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:05Z","lastTransitionTime":"2025-11-27T17:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:05 crc kubenswrapper[4792]: I1127 17:10:05.990804 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.006370 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.019315 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.036203 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.049840 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.071975 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036c
c2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.081333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.081378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.081442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.081462 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.081474 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.089216 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.107869 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.123329 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.137004 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.151111 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed 
certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\
\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.164420 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.179897 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.183108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.183133 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.183141 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.183154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.183162 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.203515 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.218146 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.230880 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.257691 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.285408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.285437 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.285445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.285458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.285468 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.299899 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.306179 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.306344 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:10:14.306328538 +0000 UTC m=+36.649154856 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.346563 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.387806 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.388092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.388187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.388284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.388373 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.389604 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a
03b00916ecc12843855d52bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.407562 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.407785 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.407826 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.407839 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.407883 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:14.407869268 +0000 UTC m=+36.750695586 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.407804 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.408000 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.408048 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.408149 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.408193 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.408249 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.408269 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.408175 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.408393 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:14.40837628 +0000 UTC m=+36.751202598 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.408467 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:14.408450292 +0000 UTC m=+36.751276630 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.408504 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:14.408491773 +0000 UTC m=+36.751318191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.418117 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.457927 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.491027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.491129 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.491155 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.491185 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.491206 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.500142 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc 
kubenswrapper[4792]: I1127 17:10:06.538523 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.577135 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.593477 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.593537 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.593554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.593580 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.593603 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.686137 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.686797 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.686207 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:06 crc kubenswrapper[4792]: E1127 17:10:06.687224 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.696844 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.696955 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.696981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.697014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.697038 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.809171 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.809460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.809476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.809510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.809524 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.879608 4792 generic.go:334] "Generic (PLEG): container finished" podID="99059038-38b6-4797-a8ba-be8bfaecfa8a" containerID="e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa" exitCode=0 Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.879790 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.879930 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerDied","Data":"e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.894502 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 
17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.908517 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.912175 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.912294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.912377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.912455 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.912517 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:06Z","lastTransitionTime":"2025-11-27T17:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.926881 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7n
m5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.939078 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.957448 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.970539 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.983838 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:06 crc kubenswrapper[4792]: I1127 17:10:06.993924 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.013906 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:07 crc kubenswrapper[4792]: E1127 17:10:07.014402 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:07 crc kubenswrapper[4792]: E1127 17:10:07.014496 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs podName:2ec75c0b-1943-49d4-8813-bf8cc5218511 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:15.014469241 +0000 UTC m=+37.357295579 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs") pod "network-metrics-daemon-5qmhg" (UID: "2ec75c0b-1943-49d4-8813-bf8cc5218511") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.014390 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a
03b00916ecc12843855d52bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.016019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.016259 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.016456 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.016553 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.016659 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.030759 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.046076 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.058287 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.098850 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.119163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.119212 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.119228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.119248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.119263 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.139321 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.179374 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.222382 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.222695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.222728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.222737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.222752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.222763 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.324912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.324956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.324967 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.324988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.325005 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.427768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.427797 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.427806 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.427818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.427826 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.530607 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.530674 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.530684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.530698 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.530709 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.633324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.633384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.633397 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.633416 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.633429 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.686291 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.686361 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:07 crc kubenswrapper[4792]: E1127 17:10:07.686394 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:07 crc kubenswrapper[4792]: E1127 17:10:07.686533 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.735567 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.735612 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.735623 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.735658 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.735672 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.839098 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.839172 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.839194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.839221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.839242 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.894564 4792 generic.go:334] "Generic (PLEG): container finished" podID="99059038-38b6-4797-a8ba-be8bfaecfa8a" containerID="fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390" exitCode=0 Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.894814 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.894803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerDied","Data":"fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.913026 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.934196 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.941759 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.941822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.941840 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.941865 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.941883 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:07Z","lastTransitionTime":"2025-11-27T17:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.952594 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:07 crc kubenswrapper[4792]: I1127 17:10:07.973554 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:07Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.003402 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a
03b00916ecc12843855d52bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.018605 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.029991 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.044515 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.044567 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.044580 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.044597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.044608 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.045890 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.062055 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.072573 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.084731 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed 
certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\
\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.101071 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.112676 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.123171 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.141207 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.146422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.146463 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.146480 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.146502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.146515 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.154995 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.248954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.248993 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.249004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.249026 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.249036 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.351663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.351691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.351699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.351712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.351720 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.453721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.453768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.453783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.453800 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.453812 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.560280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.560325 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.560340 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.560359 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.560373 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.571437 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.594871 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,
\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.611105 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.627485 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.640104 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.650283 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.662265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.662306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.662317 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.662335 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.662348 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.667453 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.677988 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.685782 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.685788 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:08 crc kubenswrapper[4792]: E1127 17:10:08.685878 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:08 crc kubenswrapper[4792]: E1127 17:10:08.685995 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.691784 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.704158 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.714039 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.724616 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.742261 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountP
ath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.755789 4792 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.764961 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.765006 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.765069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.765088 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.765102 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.770603 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.782286 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.797054 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.811055 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.824295 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.837918 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.848185 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.861010 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.867005 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.867040 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.867052 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.867067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.867079 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.870863 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.891228 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.900730 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" event={"ID":"99059038-38b6-4797-a8ba-be8bfaecfa8a","Type":"ContainerStarted","Data":"c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.903637 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.912142 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.925563 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.936665 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.969824 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.969861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.969874 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.969892 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.969904 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:08Z","lastTransitionTime":"2025-11-27T17:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:08 crc kubenswrapper[4792]: I1127 17:10:08.979022 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.028284 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a
03b00916ecc12843855d52bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.063985 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.071961 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.071990 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.072000 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.072017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.072029 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.096530 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.137464 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.177457 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.177494 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.177504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.177519 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.177529 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.184027 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.217383 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.257110 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.280807 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.280840 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.280850 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.280862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.280871 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.296358 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.338634 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.382348 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.382908 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.382928 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.382936 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.382949 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.382958 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.448715 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a
03b00916ecc12843855d52bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.469322 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.485552 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.485577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.485586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.485615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.485624 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.496002 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.539324 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.587913 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.587976 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.587993 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.588013 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.588029 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.588966 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.623585 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.658611 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.685827 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.685833 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:09 crc kubenswrapper[4792]: E1127 17:10:09.686159 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:09 crc kubenswrapper[4792]: E1127 17:10:09.686022 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.691368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.691409 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.691420 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.691437 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.691450 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.700342 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.738109 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.785821 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.793300 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.793364 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc 
kubenswrapper[4792]: I1127 17:10:09.793383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.793408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.793427 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.896554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.896598 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.896609 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.896629 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.896657 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.905211 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/0.log" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.908904 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb" exitCode=1 Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.909295 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb"} Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.909978 4792 scope.go:117] "RemoveContainer" containerID="3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.927266 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.940591 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.957587 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.971778 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.986782 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.999772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.999813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:09 crc kubenswrapper[4792]: I1127 17:10:09.999822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:09.999841 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:09.999851 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:09Z","lastTransitionTime":"2025-11-27T17:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.027812 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a
03b00916ecc12843855d52bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:09Z\\\",\\\"message\\\":\\\" 5996 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1127 17:10:09.290315 5996 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1127 17:10:09.290371 5996 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1127 17:10:09.290436 5996 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1127 17:10:09.290445 5996 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1127 17:10:09.290466 5996 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1127 17:10:09.290476 5996 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1127 17:10:09.290510 5996 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1127 17:10:09.290542 5996 factory.go:656] Stopping watch factory\\\\nI1127 17:10:09.290560 5996 ovnkube.go:599] Stopped ovnkube\\\\nI1127 17:10:09.290589 5996 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1127 17:10:09.290601 5996 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1127 17:10:09.290608 5996 handler.go:208] Removed *v1.Node event handler 2\\\\nI1127 17:10:09.290615 5996 handler.go:208] Removed *v1.Node event handler 7\\\\nI1127 17:10:09.290622 5996 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1127 17:10:09.290628 5996 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1127 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.058851 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.102005 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.102092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.102115 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.102146 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.102174 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.103522 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.140223 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.187698 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5
488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.206152 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.206187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.206198 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.206216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.206227 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.220869 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.263389 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.307018 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.308381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.308411 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.308422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.308441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.308454 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.343308 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.385434 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.411961 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.412001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.412013 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.412029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.412041 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.423943 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354
ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.514990 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.515103 4792 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.515126 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.515157 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.515180 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.618927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.618981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.618992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.619014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.619026 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.686041 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.686081 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:10 crc kubenswrapper[4792]: E1127 17:10:10.686173 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:10 crc kubenswrapper[4792]: E1127 17:10:10.686299 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.721410 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.721451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.721464 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.721480 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.721493 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.824012 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.824060 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.824072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.824089 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.824100 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.914602 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/0.log" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.918315 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.918465 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.926408 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.926458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.926474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.926493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.926507 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:10Z","lastTransitionTime":"2025-11-27T17:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.938244 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.953583 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.970705 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5
488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:10 crc kubenswrapper[4792]: I1127 17:10:10.984877 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:10Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.009923 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.024237 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.028247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.028291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.028303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.028322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.028336 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.039014 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.054223 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.081995 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:09Z\\\",\\\"message\\\":\\\" 5996 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1127 17:10:09.290315 5996 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1127 17:10:09.290371 5996 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1127 17:10:09.290436 5996 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1127 17:10:09.290445 5996 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1127 17:10:09.290466 5996 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1127 17:10:09.290476 5996 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1127 17:10:09.290510 5996 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1127 17:10:09.290542 5996 factory.go:656] Stopping watch factory\\\\nI1127 17:10:09.290560 5996 ovnkube.go:599] Stopped ovnkube\\\\nI1127 17:10:09.290589 5996 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1127 17:10:09.290601 5996 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1127 17:10:09.290608 5996 handler.go:208] Removed *v1.Node event handler 2\\\\nI1127 17:10:09.290615 5996 handler.go:208] Removed *v1.Node event handler 7\\\\nI1127 17:10:09.290622 5996 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1127 17:10:09.290628 5996 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1127 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.100623 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.113318 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.122704 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.130001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.130028 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.130037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.130050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.130059 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.133385 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.145702 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.159011 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.174822 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.233617 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.233704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.233717 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.233735 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.233748 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.337043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.337109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.337131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.337172 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.337239 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.440773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.440872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.440896 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.440926 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.440949 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.543613 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.543667 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.543679 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.543696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.543710 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.646410 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.646483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.646496 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.646512 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.646524 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.686395 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:11 crc kubenswrapper[4792]: E1127 17:10:11.686548 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.686838 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:11 crc kubenswrapper[4792]: E1127 17:10:11.686960 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.749725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.749786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.749805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.749829 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.749847 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.852314 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.852365 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.852382 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.852406 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.852424 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.923794 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/1.log" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.924886 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/0.log" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.928257 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333" exitCode=1 Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.928307 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.928363 4792 scope.go:117] "RemoveContainer" containerID="3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.929829 4792 scope.go:117] "RemoveContainer" containerID="7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333" Nov 27 17:10:11 crc kubenswrapper[4792]: E1127 17:10:11.930219 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.955847 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.955904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.955920 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.955944 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.955961 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:11Z","lastTransitionTime":"2025-11-27T17:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.955915 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.975309 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:11 crc kubenswrapper[4792]: I1127 17:10:11.991917 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:11Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.002225 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.018173 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.038111 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.056232 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.058693 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.058773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.058788 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.058810 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.058829 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.073019 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.103879 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.120336 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.136066 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.150490 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6"] Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.151197 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.153742 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.153816 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.162372 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.162487 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.162547 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.162621 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.162702 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.162967 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.165890 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/71e123df-81ab-4743-a865-515eadba43af-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.165938 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94b4\" (UniqueName: \"kubernetes.io/projected/71e123df-81ab-4743-a865-515eadba43af-kube-api-access-r94b4\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.165965 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/71e123df-81ab-4743-a865-515eadba43af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.165979 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/71e123df-81ab-4743-a865-515eadba43af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.181600 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.196832 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.212125 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:09Z\\\",\\\"message\\\":\\\" 5996 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1127 17:10:09.290315 5996 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1127 17:10:09.290371 5996 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1127 17:10:09.290436 5996 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1127 17:10:09.290445 5996 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1127 17:10:09.290466 5996 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1127 17:10:09.290476 5996 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1127 17:10:09.290510 5996 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1127 17:10:09.290542 5996 factory.go:656] Stopping watch factory\\\\nI1127 17:10:09.290560 5996 ovnkube.go:599] Stopped ovnkube\\\\nI1127 17:10:09.290589 5996 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1127 17:10:09.290601 5996 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1127 17:10:09.290608 5996 handler.go:208] Removed *v1.Node event handler 2\\\\nI1127 17:10:09.290615 5996 handler.go:208] Removed *v1.Node event handler 7\\\\nI1127 17:10:09.290622 5996 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1127 17:10:09.290628 5996 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1127 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.224227 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.235804 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.245945 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.257815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5
488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.265571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.265620 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.265633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.265668 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.265680 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.266954 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94b4\" (UniqueName: \"kubernetes.io/projected/71e123df-81ab-4743-a865-515eadba43af-kube-api-access-r94b4\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.267004 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/71e123df-81ab-4743-a865-515eadba43af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.267028 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/71e123df-81ab-4743-a865-515eadba43af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.267098 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/71e123df-81ab-4743-a865-515eadba43af-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.267732 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/71e123df-81ab-4743-a865-515eadba43af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.267841 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/71e123df-81ab-4743-a865-515eadba43af-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.269082 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.276520 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/71e123df-81ab-4743-a865-515eadba43af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.279793 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.283480 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94b4\" (UniqueName: 
\"kubernetes.io/projected/71e123df-81ab-4743-a865-515eadba43af-kube-api-access-r94b4\") pod \"ovnkube-control-plane-749d76644c-slvg6\" (UID: \"71e123df-81ab-4743-a865-515eadba43af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.296730 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.307497 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.318570 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.327657 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.337242 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.346790 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.355335 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.364702 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.367897 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.367929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.367939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.367952 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.367960 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.376319 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.390263 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.407427 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e
7286718ec56439a5e9cdc333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:09Z\\\",\\\"message\\\":\\\" 5996 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1127 17:10:09.290315 5996 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1127 17:10:09.290371 5996 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1127 17:10:09.290436 5996 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1127 17:10:09.290445 5996 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1127 17:10:09.290466 5996 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1127 17:10:09.290476 5996 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1127 17:10:09.290510 5996 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1127 17:10:09.290542 5996 factory.go:656] Stopping watch factory\\\\nI1127 17:10:09.290560 5996 ovnkube.go:599] Stopped ovnkube\\\\nI1127 17:10:09.290589 5996 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1127 17:10:09.290601 5996 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1127 17:10:09.290608 5996 handler.go:208] Removed *v1.Node event handler 2\\\\nI1127 17:10:09.290615 5996 handler.go:208] Removed *v1.Node event handler 7\\\\nI1127 17:10:09.290622 5996 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1127 17:10:09.290628 5996 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.439077 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.463473 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.470320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.470347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.470357 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.470371 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.470381 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: W1127 17:10:12.477422 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71e123df_81ab_4743_a865_515eadba43af.slice/crio-2094df4cfe9954b9f17895eefc1160eba9ab016fea3050ac7f68254f7c6bf24e WatchSource:0}: Error finding container 2094df4cfe9954b9f17895eefc1160eba9ab016fea3050ac7f68254f7c6bf24e: Status 404 returned error can't find the container with id 2094df4cfe9954b9f17895eefc1160eba9ab016fea3050ac7f68254f7c6bf24e Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.572433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.572478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.572489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.572506 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.572518 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.674263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.674288 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.674298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.674311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.674320 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.686511 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:12 crc kubenswrapper[4792]: E1127 17:10:12.686621 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.687284 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:12 crc kubenswrapper[4792]: E1127 17:10:12.687730 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.776973 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.777020 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.777035 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.777054 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.777079 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.879940 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.879966 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.879975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.879987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.879995 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.933977 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/1.log" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.939215 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" event={"ID":"71e123df-81ab-4743-a865-515eadba43af","Type":"ContainerStarted","Data":"4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.939351 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" event={"ID":"71e123df-81ab-4743-a865-515eadba43af","Type":"ContainerStarted","Data":"01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.939412 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" event={"ID":"71e123df-81ab-4743-a865-515eadba43af","Type":"ContainerStarted","Data":"2094df4cfe9954b9f17895eefc1160eba9ab016fea3050ac7f68254f7c6bf24e"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.950807 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.963231 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.973687 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.982122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.982156 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.982165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.982179 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.982189 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:12Z","lastTransitionTime":"2025-11-27T17:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.987728 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:12 crc kubenswrapper[4792]: I1127 17:10:12.996574 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:12Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.006547 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.027754 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.041844 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.054914 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.065155 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.082973 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:09Z\\\",\\\"message\\\":\\\" 5996 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1127 17:10:09.290315 5996 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1127 17:10:09.290371 5996 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1127 17:10:09.290436 5996 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1127 17:10:09.290445 5996 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1127 17:10:09.290466 5996 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1127 17:10:09.290476 5996 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1127 17:10:09.290510 5996 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1127 17:10:09.290542 5996 factory.go:656] Stopping watch factory\\\\nI1127 17:10:09.290560 5996 ovnkube.go:599] Stopped ovnkube\\\\nI1127 17:10:09.290589 5996 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1127 17:10:09.290601 5996 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1127 17:10:09.290608 5996 handler.go:208] Removed *v1.Node event handler 2\\\\nI1127 17:10:09.290615 5996 handler.go:208] Removed *v1.Node event handler 7\\\\nI1127 17:10:09.290622 5996 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1127 17:10:09.290628 5996 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.084083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.084173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.084247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.084318 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.084379 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.094726 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.104693 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.112398 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.122669 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.132414 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.142824 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:13Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.186890 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.186919 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.186929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.186942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.186950 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.290474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.290525 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.290535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.290550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.290560 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.394336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.394385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.394396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.394410 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.394419 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.497041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.497087 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.497095 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.497107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.497116 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.598908 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.598945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.598957 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.598976 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.598990 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.686482 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.686535 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:13 crc kubenswrapper[4792]: E1127 17:10:13.686627 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:13 crc kubenswrapper[4792]: E1127 17:10:13.687347 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.701100 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.701164 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.701187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.701222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.701245 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.803919 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.804144 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.804203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.804312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.804392 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.907674 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.907949 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.908013 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.908093 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:13 crc kubenswrapper[4792]: I1127 17:10:13.908160 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:13Z","lastTransitionTime":"2025-11-27T17:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.011230 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.011268 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.011277 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.011289 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.011299 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.114436 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.114488 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.114500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.114518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.114531 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.218604 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.218697 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.218718 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.218742 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.218761 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.321859 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.321906 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.321917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.321941 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.321951 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.387425 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.387753 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:10:30.387715187 +0000 UTC m=+52.730541525 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.424240 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.424276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.424284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.424296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.424306 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.488172 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.488215 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.488242 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.488260 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488357 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488398 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488423 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488424 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488451 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488462 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488439 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488413 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:30.488394767 +0000 UTC m=+52.831221085 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488502 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488529 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:30.48851464 +0000 UTC m=+52.831340958 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488758 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:30.488682324 +0000 UTC m=+52.831508682 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.488819 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:30.488800206 +0000 UTC m=+52.831626564 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.504935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.505037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.505067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.505103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.505128 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.524870 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:14Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.531177 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.531226 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.531237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.531257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.531270 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.550751 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:14Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.554967 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.555027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.555045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.555073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.555092 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.570059 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:14Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.576247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.576292 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.576303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.576324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.576337 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.595444 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:14Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.599501 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.599573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.599586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.599606 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.599617 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.614502 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:14Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.614621 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.616164 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.616199 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.616209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.616227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.616244 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.685911 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.686171 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.686354 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:14 crc kubenswrapper[4792]: E1127 17:10:14.686525 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.718454 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.718517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.718539 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.718568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.718590 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.822806 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.822872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.822896 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.822928 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.822949 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.924833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.924876 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.924889 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.924909 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:14 crc kubenswrapper[4792]: I1127 17:10:14.924925 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:14Z","lastTransitionTime":"2025-11-27T17:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.028002 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.028085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.028111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.028140 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.028163 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.097027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:15 crc kubenswrapper[4792]: E1127 17:10:15.097203 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:15 crc kubenswrapper[4792]: E1127 17:10:15.097282 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs podName:2ec75c0b-1943-49d4-8813-bf8cc5218511 nodeName:}" failed. No retries permitted until 2025-11-27 17:10:31.097259913 +0000 UTC m=+53.440086241 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs") pod "network-metrics-daemon-5qmhg" (UID: "2ec75c0b-1943-49d4-8813-bf8cc5218511") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.131740 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.131836 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.131858 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.131883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.131901 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.235593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.235687 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.235707 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.235736 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.235755 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.338527 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.338566 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.338578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.338593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.338605 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.441069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.441117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.441158 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.441196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.441208 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.544549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.544632 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.544657 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.544672 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.544680 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.646700 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.646745 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.646757 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.646773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.646787 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.686264 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:15 crc kubenswrapper[4792]: E1127 17:10:15.686411 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.686264 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:15 crc kubenswrapper[4792]: E1127 17:10:15.686559 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.749321 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.749395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.749420 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.749456 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.749477 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.853313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.853425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.853464 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.853495 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.853519 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.956374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.956442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.956479 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.956498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:15 crc kubenswrapper[4792]: I1127 17:10:15.956512 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:15Z","lastTransitionTime":"2025-11-27T17:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.059720 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.059782 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.059798 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.059827 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.059874 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.162947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.163017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.163031 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.163048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.163059 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.267069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.267134 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.267156 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.267186 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.267209 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.369555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.369588 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.369596 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.369625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.369667 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.471973 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.472030 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.472043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.472062 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.472075 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.575336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.575385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.575395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.575414 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.575427 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.678995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.679071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.679088 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.679107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.679150 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.686790 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.686861 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:16 crc kubenswrapper[4792]: E1127 17:10:16.687047 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:16 crc kubenswrapper[4792]: E1127 17:10:16.687241 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.782946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.783023 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.783036 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.783061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.783108 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.890971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.891008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.891018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.891032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.891041 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.992837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.992902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.992915 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.992932 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:16 crc kubenswrapper[4792]: I1127 17:10:16.992945 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:16Z","lastTransitionTime":"2025-11-27T17:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.095528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.095569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.095580 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.095596 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.095608 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.198326 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.198418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.198439 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.198465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.198482 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.301597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.301702 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.301727 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.301757 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.301780 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.404545 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.404589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.404598 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.404615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.404625 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.507925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.508014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.508037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.508067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.508090 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.610617 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.610678 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.610689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.610704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.610713 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.685969 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:17 crc kubenswrapper[4792]: E1127 17:10:17.686128 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.685968 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:17 crc kubenswrapper[4792]: E1127 17:10:17.686408 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.713339 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.713406 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.713428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.713458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.713478 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.816737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.816796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.816806 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.816826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.816837 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.919706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.919819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.919842 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.919912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:17 crc kubenswrapper[4792]: I1127 17:10:17.919935 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:17Z","lastTransitionTime":"2025-11-27T17:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.023126 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.023190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.023207 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.023232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.023249 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.125846 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.125913 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.125925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.125941 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.125953 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.229759 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.229842 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.229866 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.229900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.229919 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.333024 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.333071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.333085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.333103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.333116 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.435770 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.435824 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.435847 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.435881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.435901 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.538761 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.538826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.538847 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.538871 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.538891 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.642045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.642114 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.642134 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.642159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.642177 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.686813 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.686831 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:18 crc kubenswrapper[4792]: E1127 17:10:18.687029 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:18 crc kubenswrapper[4792]: E1127 17:10:18.687100 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.707688 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.723757 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.740227 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.744946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.745015 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.745038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.745074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.745098 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.756816 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.767952 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.781596 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.814397 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.836465 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.847469 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.847528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.847538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.847554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.847587 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.855109 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.866090 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.880773 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.895771 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.908426 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.925490 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.939931 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.950568 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.950604 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.950613 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.950629 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.950639 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:18Z","lastTransitionTime":"2025-11-27T17:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.957596 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:18 crc kubenswrapper[4792]: I1127 17:10:18.989567 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e
7286718ec56439a5e9cdc333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3251ebd483603c58162288047d2665928177fe9a03b00916ecc12843855d52bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:09Z\\\",\\\"message\\\":\\\" 5996 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1127 17:10:09.290315 5996 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1127 17:10:09.290371 5996 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1127 17:10:09.290436 5996 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1127 17:10:09.290445 5996 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1127 17:10:09.290466 5996 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1127 17:10:09.290476 5996 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1127 17:10:09.290510 5996 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1127 17:10:09.290542 5996 factory.go:656] Stopping watch factory\\\\nI1127 17:10:09.290560 5996 ovnkube.go:599] Stopped ovnkube\\\\nI1127 17:10:09.290589 5996 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1127 17:10:09.290601 5996 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1127 17:10:09.290608 5996 handler.go:208] Removed *v1.Node event handler 2\\\\nI1127 17:10:09.290615 5996 handler.go:208] Removed *v1.Node event handler 7\\\\nI1127 17:10:09.290622 5996 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1127 17:10:09.290628 5996 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:18Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.052439 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.052474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.052484 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.052501 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.052510 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.155523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.155584 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.155594 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.155608 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.155619 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.259444 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.259527 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.259555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.259591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.259613 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.362218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.362323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.362347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.362374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.362392 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.465922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.465988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.466011 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.466040 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.466065 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.567898 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.567942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.567952 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.567968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.567978 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.670523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.670603 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.670626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.670722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.670749 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.685687 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.685767 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:19 crc kubenswrapper[4792]: E1127 17:10:19.685815 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:19 crc kubenswrapper[4792]: E1127 17:10:19.685905 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.772835 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.772899 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.772924 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.772955 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.772980 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.875942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.875991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.876004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.876025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.876039 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.978719 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.978767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.978779 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.978795 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:19 crc kubenswrapper[4792]: I1127 17:10:19.978808 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:19Z","lastTransitionTime":"2025-11-27T17:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.081630 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.081718 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.081737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.081762 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.081782 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.184595 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.184660 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.184670 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.184685 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.184695 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.286946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.286995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.287004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.287018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.287030 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.389686 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.390072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.390224 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.390401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.390523 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.493747 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.493813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.493830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.493856 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.493876 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.597364 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.597435 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.597455 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.597479 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.597496 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.685834 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:20 crc kubenswrapper[4792]: E1127 17:10:20.685979 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.685835 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:20 crc kubenswrapper[4792]: E1127 17:10:20.686162 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.700226 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.700282 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.700304 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.700326 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.700340 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.803375 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.803430 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.803446 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.803470 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.803488 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.906588 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.906685 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.906704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.906728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.906756 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:20Z","lastTransitionTime":"2025-11-27T17:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.911990 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.913061 4792 scope.go:117] "RemoveContainer" containerID="7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333" Nov 27 17:10:20 crc kubenswrapper[4792]: E1127 17:10:20.913238 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.946244 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:20Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.968133 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:20Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:20 crc kubenswrapper[4792]: I1127 17:10:20.986311 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-27T17:10:20Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.002389 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.009087 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.009180 
4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.009199 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.009222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.009240 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.017912 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.033694 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.047784 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.061194 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.075296 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.098412 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.112215 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.112276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.112291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.112314 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.112328 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.132763 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e
7286718ec56439a5e9cdc333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.149283 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.167133 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.186933 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.209873 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5
488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.215023 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.215048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.215055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.215068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.215076 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.225587 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.242028 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:21Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.317892 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.317933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.317943 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.317960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.317970 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.421382 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.421493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.421507 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.421526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.421574 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.524729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.524782 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.524795 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.524814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.524829 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.628317 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.628379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.628396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.628422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.628440 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.685701 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.685710 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:21 crc kubenswrapper[4792]: E1127 17:10:21.685850 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:21 crc kubenswrapper[4792]: E1127 17:10:21.686024 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.732108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.732183 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.732209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.732232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.732264 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.835310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.835365 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.835389 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.835416 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.835431 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.937866 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.937953 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.937972 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.937997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:21 crc kubenswrapper[4792]: I1127 17:10:21.938015 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:21Z","lastTransitionTime":"2025-11-27T17:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.041430 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.041473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.041483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.041517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.041528 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.143728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.143811 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.143846 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.143879 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.143936 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.246534 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.246581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.246596 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.246614 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.246628 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.350223 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.350317 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.350344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.350373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.350393 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.453276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.453333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.453348 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.453371 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.453390 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.557008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.557105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.557131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.557161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.557181 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.660067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.660124 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.660147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.660176 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.660198 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.686756 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.686773 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:22 crc kubenswrapper[4792]: E1127 17:10:22.686964 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:22 crc kubenswrapper[4792]: E1127 17:10:22.687297 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.763069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.763142 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.763167 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.763196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.763219 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.865448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.865481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.865489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.865508 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.865558 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.968803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.968855 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.968867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.968886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:22 crc kubenswrapper[4792]: I1127 17:10:22.968899 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:22Z","lastTransitionTime":"2025-11-27T17:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.071605 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.072017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.072149 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.072278 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.072397 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.175132 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.175201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.175220 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.175245 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.175262 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.278275 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.278336 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.278348 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.278374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.278392 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.381687 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.381935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.381954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.381978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.381996 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.484756 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.484830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.484854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.484889 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.484914 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.587605 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.587713 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.587737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.587764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.587784 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.686508 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.686593 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:23 crc kubenswrapper[4792]: E1127 17:10:23.686745 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:23 crc kubenswrapper[4792]: E1127 17:10:23.686868 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.690555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.690609 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.690624 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.690664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.690680 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.793224 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.793304 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.793323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.793349 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.793368 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.895954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.896018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.896047 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.896078 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.896106 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.998214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.998270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.998287 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.998310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:23 crc kubenswrapper[4792]: I1127 17:10:23.998329 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:23Z","lastTransitionTime":"2025-11-27T17:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.101505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.101552 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.101566 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.101584 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.101595 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.205044 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.205121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.205145 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.205177 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.205200 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.307014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.307065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.307077 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.307099 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.307115 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.409971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.410004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.410016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.410032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.410044 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.512761 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.512818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.512837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.512862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.512880 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.619529 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.619620 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.619689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.619727 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.619752 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.646500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.646555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.646567 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.646589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.646607 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: E1127 17:10:24.661688 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:24Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.666252 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.666306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.666320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.666342 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.666356 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: E1127 17:10:24.683715 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:24Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.685934 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.685933 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:24 crc kubenswrapper[4792]: E1127 17:10:24.686066 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:24 crc kubenswrapper[4792]: E1127 17:10:24.686223 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.688580 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.688629 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.688669 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.688696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.688735 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: E1127 17:10:24.709634 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:24Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.714838 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.715012 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
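Meanwhile every NotReady heartbeat repeats the same underlying complaint: no CNI configuration file exists in /etc/kubernetes/cni/net.d/, so NetworkReady stays false and the pod sandboxes above (network-check-target, network-metrics-daemon) cannot start. A companion sketch that performs the check the message describes; the directory comes from the log, while the .conf/.conflist/.json extension filter follows common CNI convention and is an assumption:

```go
// cnicheck.go - list CNI configuration files in the directory the kubelet
// error names. Diagnostic sketch; the extension set follows the usual CNI
// naming convention and is an assumption about what the runtime accepts.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the kubelet error above

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintf(os.Stderr, "read %s: %v\n", dir, err)
		os.Exit(1)
	}

	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println(filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files found; NetworkReady will stay false")
	}
}
```

On a node in the logged state this would report no files found, consistent with NetworkPluginNotReady; the network operator normally writes this configuration once it can run, which in this log appears blocked upstream by the expired webhook certificate.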
event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.715102 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.715215 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.715359 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: E1127 17:10:24.730369 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:24Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.735191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.735271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.735288 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.735313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.735329 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: E1127 17:10:24.752847 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:24Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:24 crc kubenswrapper[4792]: E1127 17:10:24.753064 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.755112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.755177 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.755197 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.755221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.755237 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.858747 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.858814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.858834 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.859275 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.859324 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.962869 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.963025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.963051 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.963081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:24 crc kubenswrapper[4792]: I1127 17:10:24.963102 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:24Z","lastTransitionTime":"2025-11-27T17:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.066135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.066186 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.066202 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.066221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.066234 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.170294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.170394 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.170418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.170450 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.170472 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.273254 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.273319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.273341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.273371 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.273393 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.376428 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.376470 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.376481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.376499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.376509 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.478378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.478422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.478433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.478449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.478462 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.580466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.580505 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.580517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.580535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.580547 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.682751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.682790 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.682805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.682826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.682841 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.686210 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.686222 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:25 crc kubenswrapper[4792]: E1127 17:10:25.686336 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:25 crc kubenswrapper[4792]: E1127 17:10:25.686456 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
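[Editor's note] The Ready=False condition and the pod-sync failures above all trace to one message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch (not part of the log; the path is verbatim from the error, while treating .conf/.conflist/.json as CNI configs is an assumption following the usual libcni convention) to inspect that directory:

```python
# Minimal sketch: look inside the directory the kubelet names in the
# NetworkReady error and report any CNI configs found there.
import json
from pathlib import Path

NET_D = Path("/etc/kubernetes/cni/net.d")

if not NET_D.is_dir():
    print(f"{NET_D} is missing -- the network provider has not written a config")
else:
    configs = sorted(p for p in NET_D.iterdir()
                     if p.suffix in {".conf", ".conflist", ".json"})
    if not configs:
        print(f"{NET_D} exists but holds no CNI config -- matches the log")
    for p in configs:
        try:
            data = json.loads(p.read_text())
            print(p.name, "->", data.get("name"), data.get("type", data.get("plugins")))
        except (OSError, json.JSONDecodeError) as exc:
            print(p.name, "unreadable or invalid:", exc)
```

The error repeats unchanged through the rest of this excerpt, which is consistent with the directory staying empty: the network provider never comes up on this boot.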
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.785539 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.785591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.785602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.785619 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.785631 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.888794 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.888860 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.888878 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.888902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.888920 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.991064 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.991106 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.991118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.991137 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:25 crc kubenswrapper[4792]: I1127 17:10:25.991148 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:25Z","lastTransitionTime":"2025-11-27T17:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.094314 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.094387 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.094412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.094465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.094484 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.197081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.197141 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.197162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.197185 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.197202 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.299695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.299741 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.299750 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.299771 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.299785 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.402792 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.402851 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.402867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.402891 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.402909 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.506058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.506091 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.506101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.506117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.506127 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.609117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.609162 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.609178 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.609200 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.609217 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.685867 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:26 crc kubenswrapper[4792]: E1127 17:10:26.686294 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.685940 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:26 crc kubenswrapper[4792]: E1127 17:10:26.687024 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.712184 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.712237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.712255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.712279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.712296 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.814908 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.814986 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.815011 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.815035 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.815052 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.917237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.917313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.917337 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.917369 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:26 crc kubenswrapper[4792]: I1127 17:10:26.917394 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:26Z","lastTransitionTime":"2025-11-27T17:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.021345 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.021865 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.022152 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.022267 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.022374 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.125117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.125190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.125211 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.125234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.125251 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.228844 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.229221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.229381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.229533 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.229706 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.332702 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.332774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.332795 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.332826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.332848 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.435611 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.435674 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.435689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.435705 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.435717 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.538694 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.538759 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.538776 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.538806 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.538824 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.641386 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.641447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.641468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.641494 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.641511 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.686236 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:27 crc kubenswrapper[4792]: E1127 17:10:27.686413 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.686237 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:27 crc kubenswrapper[4792]: E1127 17:10:27.687225 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.744338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.744721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.744949 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.745138 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.745300 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.847450 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.847533 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.847561 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.847575 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.847585 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.950465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.950507 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.950516 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.950531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:27 crc kubenswrapper[4792]: I1127 17:10:27.950541 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:27Z","lastTransitionTime":"2025-11-27T17:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.054058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.054112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.054125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.054144 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.054157 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.157309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.157432 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.157458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.157485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.157505 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.260669 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.260718 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.260727 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.260760 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.260770 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.363503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.363549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.363560 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.363575 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.363586 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.466251 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.466290 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.466299 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.466313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.466325 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.569311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.569377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.569394 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.569417 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.569433 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.672514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.672570 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.672587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.672609 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.672629 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.686526 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.686592 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:28 crc kubenswrapper[4792]: E1127 17:10:28.687128 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:28 crc kubenswrapper[4792]: E1127 17:10:28.687529 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.710000 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.733895 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.750989 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 
17:10:28.768988 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.775820 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.775905 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.775927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.775962 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.775984 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.789460 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.809155 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.826230 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.845899 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.869451 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.879295 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.879348 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.879360 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.879378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.879391 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.887891 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.901143 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.914556 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.925707 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.939787 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.963239 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace 
openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.980012 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
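The networking-console-plugin and network-check-target entries both carry lastState.terminated with exitCode 137 and reason ContainerStatusUnknown: the containers could not be located after the node restart, and 137 follows the shell convention of 128 plus the signal number, i.e. the container was killed by signal 9 (SIGKILL). A trivial sketch of that decoding:

```go
package main

import "fmt"

func main() {
	// Exit codes above 128 encode 128 + signal number,
	// so 137 means termination by signal 9 (SIGKILL).
	exitCode := 137
	if exitCode > 128 {
		fmt.Printf("exit %d -> killed by signal %d\n", exitCode, exitCode-128)
	}
}
```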
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.981777 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.981844 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.981862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.981886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.981899 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:28Z","lastTransitionTime":"2025-11-27T17:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
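The recurring setters.go entries show why the node keeps flipping to NotReady: the ovnkube-controller container, which mounts /etc/cni/net.d as host-cni-netd above, is crash-looping, so no CNI configuration appears in /etc/kubernetes/cni/net.d/, the runtime reports NetworkReady=false, and the kubelet sets the Ready condition to False with reason KubeletNotReady. A minimal sketch parsing the condition object exactly as logged; the struct is an illustrative subset of the real corev1.NodeCondition type, not the full API object.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition mirrors the fields the kubelet logs in setters.go above;
// unknown fields in the JSON (heartbeat and transition times) are ignored.
type NodeCondition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```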
Has your network provider started?"} Nov 27 17:10:28 crc kubenswrapper[4792]: I1127 17:10:28.998379 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:28Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.072766 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.084622 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.084692 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.084704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.084725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.084737 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.088935 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.100907 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
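The patch payloads in these "failed to patch status" entries are JSON carried inside a quoted log field, which is why every quote surfaces as \\\" in the journal text. Assuming the outer log quoting has already been stripped when extracting the payload, one strconv.Unquote pass plus json.Indent recovers readable JSON; the fragment below is shortened from the multus-gbrqr patch above, since the real payloads run to several kilobytes.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// One remaining escaping level, shortened from the patch payload above.
	quoted := `"{\"metadata\":{\"uid\":\"71907161-f8b0-4b44-b61a-0e04200083f0\"}}"`
	// strconv.Unquote strips the outer quotes and resolves the \" escapes,
	// yielding plain JSON.
	s, err := strconv.Unquote(quoted)
	if err != nil {
		panic(err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(s), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}
```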
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.119327 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.136403 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":
\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.162438 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e
7286718ec56439a5e9cdc333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.179035 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.188239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.188296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.188311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.188331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.188351 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.201259 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.213734 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.233120 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.250252 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.265302 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.286095 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.291312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.291574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.291590 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.291605 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.291615 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.303509 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.315591 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.328248 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.350024 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.371304 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.387584 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-27T17:10:29Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.394556 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.394621 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.394640 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.394696 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.394717 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.497247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.497313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.497331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.497362 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.497386 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.599968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.600030 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.600048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.600074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.600092 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.686235 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.686235 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:29 crc kubenswrapper[4792]: E1127 17:10:29.686367 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:29 crc kubenswrapper[4792]: E1127 17:10:29.686433 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.703881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.703941 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.703960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.703989 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.704012 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.807203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.807623 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.807894 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.808050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.808199 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.911005 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.911055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.911074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.911100 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:29 crc kubenswrapper[4792]: I1127 17:10:29.911119 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:29Z","lastTransitionTime":"2025-11-27T17:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.013029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.013069 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.013079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.013094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.013103 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.116225 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.116272 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.116291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.116313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.116330 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.219119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.219173 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.219187 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.219206 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.219221 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.321478 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.321539 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.321555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.321578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.321595 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.424741 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.424793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.424803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.424819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.424830 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.466465 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.466680 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:11:02.466623962 +0000 UTC m=+84.809450310 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.527865 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.527929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.527947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.527970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.527988 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.568368 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.568438 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.568515 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.568551 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.568704 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.568710 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.568734 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.568817 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.568913 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.568824 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.569000 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.569019 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.568787 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:11:02.568765777 +0000 UTC m=+84.911592135 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.569099 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:11:02.569083554 +0000 UTC m=+84.911909912 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.569124 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-27 17:11:02.569113115 +0000 UTC m=+84.911939473 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.569561 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-27 17:11:02.569520964 +0000 UTC m=+84.912347452 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.631085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.631154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.631171 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.631196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.631222 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.686016 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.686174 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.686326 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:30 crc kubenswrapper[4792]: E1127 17:10:30.686559 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.733960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.734020 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.734036 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.734062 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.734084 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.837459 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.837508 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.837526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.837549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.837567 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.941378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.941431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.941443 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.941465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:30 crc kubenswrapper[4792]: I1127 17:10:30.941476 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:30Z","lastTransitionTime":"2025-11-27T17:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.044483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.044525 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.044536 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.044551 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.044562 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.147561 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.147609 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.147625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.147665 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.147678 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.176139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:31 crc kubenswrapper[4792]: E1127 17:10:31.176305 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:31 crc kubenswrapper[4792]: E1127 17:10:31.176356 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs podName:2ec75c0b-1943-49d4-8813-bf8cc5218511 nodeName:}" failed. No retries permitted until 2025-11-27 17:11:03.176343212 +0000 UTC m=+85.519169530 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs") pod "network-metrics-daemon-5qmhg" (UID: "2ec75c0b-1943-49d4-8813-bf8cc5218511") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.250935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.250981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.250994 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.251030 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.251043 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.353538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.353585 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.353597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.353615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.353630 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.456740 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.456802 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.456819 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.456842 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.456859 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.560190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.560277 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.560301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.560355 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.560374 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.662955 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.663019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.663045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.663072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.663092 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.685925 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.685987 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:31 crc kubenswrapper[4792]: E1127 17:10:31.686132 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:31 crc kubenswrapper[4792]: E1127 17:10:31.686272 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.766296 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.766388 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.766416 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.766446 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.766469 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.869996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.870048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.870066 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.870092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.870110 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.973298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.973372 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.973388 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.973431 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:31 crc kubenswrapper[4792]: I1127 17:10:31.973450 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:31Z","lastTransitionTime":"2025-11-27T17:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.076641 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.076753 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.076772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.076796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.076837 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.179598 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.179704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.179730 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.179758 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.179778 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.282994 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.283082 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.283114 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.283161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.283183 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.386356 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.386399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.386409 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.386429 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.386439 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.489143 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.489216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.489235 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.489258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.489275 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.591893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.591972 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.591995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.592025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.592047 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.686269 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:32 crc kubenswrapper[4792]: E1127 17:10:32.686470 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.686543 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:32 crc kubenswrapper[4792]: E1127 17:10:32.686847 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.695238 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.695377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.695391 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.695407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.695417 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.798916 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.798979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.799001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.799032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.799055 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.903311 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.903385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.903405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.903430 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:32 crc kubenswrapper[4792]: I1127 17:10:32.903450 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:32Z","lastTransitionTime":"2025-11-27T17:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.005357 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.005405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.005421 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.005446 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.005465 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.107842 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.107885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.107897 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.107915 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.107927 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.210678 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.210714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.210721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.210733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.210742 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.313837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.313911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.313935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.313964 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.313987 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.415977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.416010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.416021 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.416037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.416048 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.518985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.519054 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.519080 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.519124 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.519152 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.622500 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.622567 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.622593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.622630 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.622692 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.686461 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.686532 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:33 crc kubenswrapper[4792]: E1127 17:10:33.686677 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:33 crc kubenswrapper[4792]: E1127 17:10:33.686820 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.726180 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.726251 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.726272 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.726301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.726323 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.828973 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.829038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.829050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.829068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.829081 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.931763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.931812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.931829 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.931851 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:33 crc kubenswrapper[4792]: I1127 17:10:33.931868 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:33Z","lastTransitionTime":"2025-11-27T17:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.034508 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.034583 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.034601 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.034625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.034681 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.137131 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.137208 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.137229 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.137287 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.137308 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.240159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.240246 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.240272 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.240304 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.240328 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.343728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.343797 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.343822 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.343852 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.343871 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.446851 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.446928 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.446951 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.446983 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.447008 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.550872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.550956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.550975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.551001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.551019 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.654579 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.654707 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.654727 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.654754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.654771 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.686683 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:34 crc kubenswrapper[4792]: E1127 17:10:34.686866 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.686898 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:34 crc kubenswrapper[4792]: E1127 17:10:34.687407 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.687898 4792 scope.go:117] "RemoveContainer" containerID="7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.758619 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.759063 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.759090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.759121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.759150 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.863228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.863260 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.863270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.863289 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.863300 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.965107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.965169 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.965180 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.965196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:34 crc kubenswrapper[4792]: I1127 17:10:34.965208 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:34Z","lastTransitionTime":"2025-11-27T17:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.016792 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/1.log" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.020950 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.021518 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.037387 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.057081 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.067688 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.067760 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.067784 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.067813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.067838 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.075859 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.090818 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.115183 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.133364 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.142946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.142987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.142999 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.143019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.143032 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: E1127 17:10:35.158194 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.162723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.162755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.162766 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.162783 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.162795 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.172669 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: E1127 17:10:35.185030 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.191333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.191379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.191387 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.191402 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.191413 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.196510 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: E1127 17:10:35.210858 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"0
65c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.216742 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.217254 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.217339 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.217418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.217483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.217550 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:35 crc kubenswrapper[4792]: E1127 17:10:35.230014 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.231640 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.234390 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.234424 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.234436 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.234455 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.234467 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:35 crc kubenswrapper[4792]: E1127 17:10:35.246106 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:35 crc kubenswrapper[4792]: E1127 17:10:35.246330 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.247183 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.248046 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.248105 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.248122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.248143 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.248160 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.263981 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165
199118bb30da576bb32bd696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.275284 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.288036 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.299471 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.314567 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.327861 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.341219 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.351052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.351083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.351093 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.351109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.351119 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.453227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.453272 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.453283 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.453298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.453308 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.555639 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.555744 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.555772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.555803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.555827 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.658411 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.658447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.658460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.658476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.658489 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.686278 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.686278 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:35 crc kubenswrapper[4792]: E1127 17:10:35.686385 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:35 crc kubenswrapper[4792]: E1127 17:10:35.686449 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.762191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.762256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.762281 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.762309 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.762330 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.865776 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.865814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.865825 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.865841 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.865852 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.968466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.968533 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.968549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.968573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:35 crc kubenswrapper[4792]: I1127 17:10:35.968591 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:35Z","lastTransitionTime":"2025-11-27T17:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.027339 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/2.log" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.028312 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/1.log" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.032106 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696" exitCode=1 Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.032163 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696"} Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.032214 4792 scope.go:117] "RemoveContainer" containerID="7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.033295 4792 scope.go:117] "RemoveContainer" containerID="222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696" Nov 27 17:10:36 crc kubenswrapper[4792]: E1127 17:10:36.033580 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.051631 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.070554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.070591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.070601 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.070616 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.070626 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.082835 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165
199118bb30da576bb32bd696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7983ea06c4474a2e8842ba98fd0152fb43868d0e7286718ec56439a5e9cdc333\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:11Z\\\",\\\"message\\\":\\\"vn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094189 6198 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1127 17:10:11.094013 6198 services_controller.go:360] Finished syncing service machine-config-daemon on namespace openshift-machine-config-operator for network=default : 1.888145ms\\\\nF1127 17:10:11.094015 6198 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.096687 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.107471 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.117706 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.129111 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.139528 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.152339 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.163333 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.173737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.173808 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.173833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.173862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.173882 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.176464 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.188988 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.205314 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.225369 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.238018 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.255402 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.269487 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.276744 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.276796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.276814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.276839 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.276855 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.281631 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.292361 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:36Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.379785 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.379855 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.379878 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.379910 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.379930 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.483335 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.483412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.483439 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.483468 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.483492 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.586404 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.586484 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.586502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.586942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.587252 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.685915 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.685961 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:10:36 crc kubenswrapper[4792]: E1127 17:10:36.686114 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511"
Nov 27 17:10:36 crc kubenswrapper[4792]: E1127 17:10:36.686235 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.690164 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.690273 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.690292 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.690316 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.690332 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.793765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.793831 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.793853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.793885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.793907 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.896030 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.896088 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.896109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.896168 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.896191 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.998833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.998886 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.998899 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.998916 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:36 crc kubenswrapper[4792]: I1127 17:10:36.998928 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:36Z","lastTransitionTime":"2025-11-27T17:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.037449 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/2.log"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.041453 4792 scope.go:117] "RemoveContainer" containerID="222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696"
Nov 27 17:10:37 crc kubenswrapper[4792]: E1127 17:10:37.041736 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.066550 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.080087 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.094375 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.102199 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.102402 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.102459 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.102519 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.102611 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.107472 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.125762 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.151467 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.165286 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.179598 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.190238 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.203016 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.204769 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.204847 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.204867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.204889 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.204934 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.215808 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.229302 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.240933 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.257038 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.272597 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.284876 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.301225 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.307486 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.307515 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:37 crc 
kubenswrapper[4792]: I1127 17:10:37.307524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.307536 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.307548 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.312815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:37Z is after 2025-08-24T17:21:41Z" Nov 
27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.409596 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.409664 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.409679 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.409695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.409731 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.513022 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.513075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.513092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.513115 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.513131 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.615938 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.615997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.616014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.616043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.616061 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.686665 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.686689 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:37 crc kubenswrapper[4792]: E1127 17:10:37.686812 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:37 crc kubenswrapper[4792]: E1127 17:10:37.686958 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.719407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.719492 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.719518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.719568 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.719594 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.823074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.823125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.823146 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.823176 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.823241 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.926879 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.926961 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.926985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.927014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:37 crc kubenswrapper[4792]: I1127 17:10:37.927043 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:37Z","lastTransitionTime":"2025-11-27T17:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.030603 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.030734 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.030759 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.030796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.030818 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.134832 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.134897 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.134919 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.134945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.134962 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.237948 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.237982 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.237990 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.238004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.238014 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.341197 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.341270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.341285 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.341305 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.341318 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.444399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.444462 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.444481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.444507 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.444526 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.547458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.547523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.547541 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.547605 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.547625 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.650577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.650623 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.650633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.650669 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.650689 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.685796 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:38 crc kubenswrapper[4792]: E1127 17:10:38.685901 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.686471 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:38 crc kubenswrapper[4792]: E1127 17:10:38.686889 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.709629 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.723999 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.740074 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.754275 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.754729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.754888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.755042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.755627 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.762936 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.776297 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.796337 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.817887 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.834193 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.873337 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.873494 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.873710 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.873733 4792 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.873784 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.873803 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.896339 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.925994 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.937794 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.947733 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.958140 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.970619 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.975895 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.975953 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.975970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.975995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.976015 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:38Z","lastTransitionTime":"2025-11-27T17:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:38 crc kubenswrapper[4792]: I1127 17:10:38.986041 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:38Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.005524 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165
199118bb30da576bb32bd696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:39Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.017909 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:39Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.078672 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.079033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.079055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.079088 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.079110 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.181429 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.181493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.181517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.181544 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.181564 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.284123 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.284208 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.284236 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.284258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.284275 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.387447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.387492 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.387502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.387518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.387529 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.489793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.489842 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.489853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.489870 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.489883 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.592726 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.592768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.592778 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.592793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.592802 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.686362 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.686469 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:39 crc kubenswrapper[4792]: E1127 17:10:39.686535 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:39 crc kubenswrapper[4792]: E1127 17:10:39.686687 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.695121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.695239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.695256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.695273 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.695283 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.797945 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.798006 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.798014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.798029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.798039 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.900791 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.900844 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.900862 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.900882 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:39 crc kubenswrapper[4792]: I1127 17:10:39.900896 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:39Z","lastTransitionTime":"2025-11-27T17:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.003621 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.003778 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.003809 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.003843 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.003862 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.109525 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.109672 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.109689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.109720 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.109751 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.212352 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.212465 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.212493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.212528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.212550 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.316839 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.316913 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.316937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.316968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.316993 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.419766 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.419809 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.419817 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.419833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.419842 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.522181 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.522220 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.522229 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.522242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.522254 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.625152 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.625195 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.625204 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.625221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.625232 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.686132 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.686281 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:40 crc kubenswrapper[4792]: E1127 17:10:40.686321 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:40 crc kubenswrapper[4792]: E1127 17:10:40.686500 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.751882 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.751944 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.751962 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.751984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.752001 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.855262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.855341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.855367 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.855448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.855511 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.958518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.958577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.958594 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.958616 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:40 crc kubenswrapper[4792]: I1127 17:10:40.958631 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:40Z","lastTransitionTime":"2025-11-27T17:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.061199 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.061263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.061319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.061346 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.061457 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.164083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.164133 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.164148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.164170 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.164182 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.267294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.267343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.267355 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.267375 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.267386 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.370812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.370854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.370865 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.370883 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.370894 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.473889 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.473925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.473936 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.473951 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.473963 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.576730 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.576779 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.576793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.576811 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.576822 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.680219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.680295 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.680312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.680332 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.680347 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.686741 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.686846 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:41 crc kubenswrapper[4792]: E1127 17:10:41.686885 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:41 crc kubenswrapper[4792]: E1127 17:10:41.687049 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.783087 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.783148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.783167 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.783192 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.783210 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.885920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.885970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.885984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.886002 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.886013 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.988867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.988915 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.988930 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.988954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:41 crc kubenswrapper[4792]: I1127 17:10:41.988968 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:41Z","lastTransitionTime":"2025-11-27T17:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.090771 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.090816 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.090826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.090841 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.090852 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.194602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.194682 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.194701 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.194723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.194745 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.297678 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.297814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.297833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.297858 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.297874 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.400702 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.400758 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.400786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.400810 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.400827 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.503489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.503545 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.503570 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.503612 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.503633 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.606595 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.606713 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.606733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.606763 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.606779 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.686618 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:42 crc kubenswrapper[4792]: E1127 17:10:42.686777 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.686916 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:42 crc kubenswrapper[4792]: E1127 17:10:42.687123 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.710170 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.710232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.710247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.710273 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.710289 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.812891 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.812927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.812937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.812951 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.812961 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.915090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.915148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.915166 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.915194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:42 crc kubenswrapper[4792]: I1127 17:10:42.915205 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:42Z","lastTransitionTime":"2025-11-27T17:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.017700 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.017733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.017741 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.017755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.017764 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.120384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.120438 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.120448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.120463 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.120472 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.222743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.222786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.222795 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.222810 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.222818 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.325378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.325427 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.325439 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.325457 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.325469 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.428281 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.428333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.428351 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.428373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.428390 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.531616 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.531712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.531736 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.531767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.531788 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.635134 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.635198 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.635217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.635253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.635288 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.685720 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.685830 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:43 crc kubenswrapper[4792]: E1127 17:10:43.685940 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:43 crc kubenswrapper[4792]: E1127 17:10:43.686164 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.738569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.738672 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.738706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.738734 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.738757 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.840380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.840446 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.840458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.840473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.840487 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.943083 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.943151 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.943339 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.943383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:43 crc kubenswrapper[4792]: I1127 17:10:43.943406 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:43Z","lastTransitionTime":"2025-11-27T17:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.047127 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.047193 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.047211 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.047236 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.047259 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.149837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.149885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.149900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.149920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.149935 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.252388 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.252438 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.252450 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.252467 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.252480 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.355109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.355148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.355157 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.355171 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.355179 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.456921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.456967 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.456978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.456993 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.457005 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.559019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.559073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.559088 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.559108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.559125 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.662681 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.662750 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.662808 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.662838 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.662860 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.686294 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:44 crc kubenswrapper[4792]: E1127 17:10:44.686456 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.686533 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:44 crc kubenswrapper[4792]: E1127 17:10:44.686764 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.765348 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.765390 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.765400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.765417 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.765428 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.868484 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.868525 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.868539 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.868554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.868564 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.971074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.971161 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.971218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.971251 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:44 crc kubenswrapper[4792]: I1127 17:10:44.971291 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:44Z","lastTransitionTime":"2025-11-27T17:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.073150 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.073188 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.073197 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.073209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.073219 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.176249 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.176303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.176315 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.176333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.176346 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.278503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.278537 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.278544 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.278557 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.278568 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.350441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.350492 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.350504 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.350518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.350530 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: E1127 17:10:45.363943 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:45Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.367341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.367373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.367381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.367396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.367405 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: E1127 17:10:45.378148 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:45Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.380950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.380996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.381013 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.381035 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.381049 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: E1127 17:10:45.394738 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:45Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.397863 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.397920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.397936 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.397954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.397967 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: E1127 17:10:45.410608 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:45Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.413499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.413533 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.413544 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.413562 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.413573 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: E1127 17:10:45.431994 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:45Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:45 crc kubenswrapper[4792]: E1127 17:10:45.432125 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.433752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.433809 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.433823 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.433843 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.433882 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.535685 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.535755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.535770 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.535790 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.535802 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.638322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.638364 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.638375 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.638393 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.638403 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.686451 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.686498 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:45 crc kubenswrapper[4792]: E1127 17:10:45.686626 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:45 crc kubenswrapper[4792]: E1127 17:10:45.686807 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.740867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.741017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.741052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.741081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.741106 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.843052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.843090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.843102 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.843122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.843149 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.945547 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.945594 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.945609 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.945630 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:45 crc kubenswrapper[4792]: I1127 17:10:45.945677 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:45Z","lastTransitionTime":"2025-11-27T17:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.048075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.048125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.048137 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.048154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.048168 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.151081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.151176 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.151199 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.151228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.151249 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.254298 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.254349 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.254366 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.254388 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.254407 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.357027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.357068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.357082 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.357103 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.357120 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.460201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.460249 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.460262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.460280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.460291 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.562301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.562349 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.562361 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.562377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.562388 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.664963 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.664992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.665003 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.665017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.665028 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.685770 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.685793 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:46 crc kubenswrapper[4792]: E1127 17:10:46.685925 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:46 crc kubenswrapper[4792]: E1127 17:10:46.686070 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.766497 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.766537 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.766549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.766565 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.766576 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.869749 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.869827 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.869854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.869884 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.869912 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.973888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.973943 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.973958 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.973979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:46 crc kubenswrapper[4792]: I1127 17:10:46.973991 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:46Z","lastTransitionTime":"2025-11-27T17:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.074466 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/0.log" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.074542 4792 generic.go:334] "Generic (PLEG): container finished" podID="71907161-f8b0-4b44-b61a-0e04200083f0" containerID="fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7" exitCode=1 Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.074593 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gbrqr" event={"ID":"71907161-f8b0-4b44-b61a-0e04200083f0","Type":"ContainerDied","Data":"fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.075238 4792 scope.go:117] "RemoveContainer" containerID="fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.075837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.075905 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.075928 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.075956 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.075977 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.101930 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.117009 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.127390 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.152419 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.165235 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.174227 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.178906 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.178940 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.178953 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.178969 4792 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.178985 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.184634 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.197731 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.211105 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:46Z\\\",\\\"message\\\":\\\"2025-11-27T17:10:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b\\\\n2025-11-27T17:10:01+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b to /host/opt/cni/bin/\\\\n2025-11-27T17:10:01Z [verbose] multus-daemon started\\\\n2025-11-27T17:10:01Z [verbose] Readiness Indicator file check\\\\n2025-11-27T17:10:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.230868 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.241448 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.253735 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.264455 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.274913 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.281211 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.281251 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.281260 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.281275 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.281285 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.290772 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.299630 4792 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.312340 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.327620 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:47Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.383485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.383510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.383518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.383530 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.383539 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.486934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.487016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.487038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.487068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.487092 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.589960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.590010 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.590023 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.590042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.590056 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.686075 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.686169 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:47 crc kubenswrapper[4792]: E1127 17:10:47.686231 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:47 crc kubenswrapper[4792]: E1127 17:10:47.686466 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.687166 4792 scope.go:117] "RemoveContainer" containerID="222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696" Nov 27 17:10:47 crc kubenswrapper[4792]: E1127 17:10:47.687400 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.691960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.691996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.692008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.692025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.692036 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.794239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.794273 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.794284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.794301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.794313 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.897046 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.897106 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.897124 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.897148 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:47 crc kubenswrapper[4792]: I1127 17:10:47.897166 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:47Z","lastTransitionTime":"2025-11-27T17:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:47.999960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.000000 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.000011 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.000027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.000038 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.081374 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/0.log" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.081442 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gbrqr" event={"ID":"71907161-f8b0-4b44-b61a-0e04200083f0","Type":"ContainerStarted","Data":"0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.096268 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.102445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.102502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.102520 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.102543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.102561 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.109313 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.124073 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.135162 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.148632 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.159257 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.171884 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 
17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.192512 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.205392 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.205470 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.205514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.205526 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.205542 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.205554 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.216089 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.231880 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.249917 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.262630 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.275595 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.287217 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.299416 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.308071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.308109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.308118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.308134 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.308144 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.310668 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.322804 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:46Z\\\",\\\"message\\\":\\\"2025-11-27T17:10:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b\\\\n2025-11-27T17:10:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b to /host/opt/cni/bin/\\\\n2025-11-27T17:10:01Z [verbose] multus-daemon started\\\\n2025-11-27T17:10:01Z [verbose] Readiness Indicator file check\\\\n2025-11-27T17:10:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.410411 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.410462 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.410473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.410489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.410500 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.512688 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.512737 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.512750 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.512765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.512796 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.614357 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.614391 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.614401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.614415 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.614424 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.686555 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.686680 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:48 crc kubenswrapper[4792]: E1127 17:10:48.686787 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:48 crc kubenswrapper[4792]: E1127 17:10:48.686878 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.703766 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0a
bdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z
\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.711954 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.716107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.716142 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.716152 4792 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.716170 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.716181 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.721593 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.734278 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.751747 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.763809 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.775826 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.797754 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.814108 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.818876 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.818925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.818936 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.818950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.818978 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.829041 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.841814 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.851587 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.862498 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:46Z\\\",\\\"message\\\":\\\"2025-11-27T17:10:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b\\\\n2025-11-27T17:10:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b to /host/opt/cni/bin/\\\\n2025-11-27T17:10:01Z [verbose] multus-daemon started\\\\n2025-11-27T17:10:01Z [verbose] Readiness Indicator file check\\\\n2025-11-27T17:10:46Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.881007 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.892142 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.906054 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.917015 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.921248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.921294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.921310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.921333 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.921351 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:48Z","lastTransitionTime":"2025-11-27T17:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:48 crc kubenswrapper[4792]: I1127 17:10:48.927858 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:48Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.022968 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.023003 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.023012 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.023024 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.023034 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.125060 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.125373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.125385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.125400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.125410 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.227793 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.227830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.227839 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.227854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.227863 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.330218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.330253 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.330262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.330274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.330310 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.433863 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.433929 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.433951 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.433981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.434003 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.536167 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.536209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.536220 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.536234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.536243 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.639960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.639996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.640008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.640025 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.640035 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.685709 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 27 17:10:49 crc kubenswrapper[4792]: E1127 17:10:49.685853 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.686055 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 27 17:10:49 crc kubenswrapper[4792]: E1127 17:10:49.686132 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.742354 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.742392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.742400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.742418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.742435 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.845268 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.845540 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.845631 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.845741 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.845866 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.948093 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.948128 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.948140 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.948154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:49 crc kubenswrapper[4792]: I1127 17:10:49.948165 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:49Z","lastTransitionTime":"2025-11-27T17:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.050768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.050799 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.050807 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.050821 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.050830 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.153155 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.153209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.153222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.153240 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.153253 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.255627 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.255699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.255711 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.255726 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.255738 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.358514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.358560 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.358569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.358585 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.358597 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.461873 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.461958 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.461981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.462011 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.462032 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.564622 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.564687 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.564698 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.564716 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.564726 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.668055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.668101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.668110 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.668126 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.668135 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.685842 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.685842 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg"
Nov 27 17:10:50 crc kubenswrapper[4792]: E1127 17:10:50.686139 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 27 17:10:50 crc kubenswrapper[4792]: E1127 17:10:50.686444 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.770365 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.770669 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.770770 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.770872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.770959 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.873797 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.873860 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.873877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.873902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.873919 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.976921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.976969 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.976982 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.976998 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:50 crc kubenswrapper[4792]: I1127 17:10:50.977014 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:50Z","lastTransitionTime":"2025-11-27T17:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.078964 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.079014 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.079031 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.079053 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.079070 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.182116 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.182219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.182239 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.182263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.182282 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.285045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.285101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.285120 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.285144 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.285164 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.388189 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.388237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.388249 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.388266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.388279 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.490263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.490331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.490384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.490399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.490408 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.593511 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.593559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.593571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.593586 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.593598 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.686399 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.686399 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:51 crc kubenswrapper[4792]: E1127 17:10:51.686733 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:51 crc kubenswrapper[4792]: E1127 17:10:51.686844 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.696597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.696694 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.696714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.696735 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.696749 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.799994 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.800048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.800061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.800081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.800094 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.902449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.902489 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.902502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.902519 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:51 crc kubenswrapper[4792]: I1127 17:10:51.902531 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:51Z","lastTransitionTime":"2025-11-27T17:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.004255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.004312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.004328 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.004350 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.004365 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.106554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.106601 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.106612 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.106630 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.106796 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.209694 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.210112 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.210203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.210284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.210384 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.312667 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.312762 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.312771 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.312784 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.312795 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.415872 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.415922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.415934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.415952 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.415965 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.518496 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.518535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.518546 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.518564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.518575 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.621358 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.621768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.621937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.622058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.622240 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.686175 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.686180 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:52 crc kubenswrapper[4792]: E1127 17:10:52.686865 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:52 crc kubenswrapper[4792]: E1127 17:10:52.687113 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.724947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.724988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.725000 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.725017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.725028 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.828182 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.828509 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.828732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.828884 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.829042 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.931471 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.931535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.931554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.931578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:52 crc kubenswrapper[4792]: I1127 17:10:52.931595 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:52Z","lastTransitionTime":"2025-11-27T17:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.034740 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.035077 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.035234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.035387 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.035541 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.137937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.138016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.138043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.138082 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.138106 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.240757 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.241009 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.241075 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.241196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.241277 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.344392 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.344449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.344466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.344491 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.344507 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.447355 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.447452 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.447467 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.447487 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.447502 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.549714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.549767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.549779 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.549799 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.549808 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.651984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.652060 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.652076 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.652100 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.652114 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.686559 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.686721 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:53 crc kubenswrapper[4792]: E1127 17:10:53.686977 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:53 crc kubenswrapper[4792]: E1127 17:10:53.687118 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.754247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.754313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.754329 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.754351 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.754366 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.856510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.856720 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.856735 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.856751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.856763 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.959689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.959728 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.959739 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.959790 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:53 crc kubenswrapper[4792]: I1127 17:10:53.959803 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:53Z","lastTransitionTime":"2025-11-27T17:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.062545 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.062614 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.062638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.062695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.062716 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:54Z","lastTransitionTime":"2025-11-27T17:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.165934 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.166000 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.166018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.166042 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.166064 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:54Z","lastTransitionTime":"2025-11-27T17:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.269043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.269096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.269104 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.269118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.269127 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:54Z","lastTransitionTime":"2025-11-27T17:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.371977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.372016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.372027 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.372043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.372054 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:54Z","lastTransitionTime":"2025-11-27T17:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.474631 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.474680 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.474689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.474701 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.474709 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:54Z","lastTransitionTime":"2025-11-27T17:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.578132 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.578170 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.578178 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.578195 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.578206 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:54Z","lastTransitionTime":"2025-11-27T17:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.682067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.682146 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.682170 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.682204 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.682228 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:54Z","lastTransitionTime":"2025-11-27T17:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.686323 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:10:54 crc kubenswrapper[4792]: I1127 17:10:54.686332 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg"
Nov 27 17:10:54 crc kubenswrapper[4792]: E1127 17:10:54.686501 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 27 17:10:54 crc kubenswrapper[4792]: E1127 17:10:54.686734 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511"
[... the same four "Recording event message for node" entries (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) followed by the same "Node became not ready" condition repeat verbatim, only the timestamps changing, at 17:10:54.784735, 17:10:54.886922, 17:10:54.990330, 17:10:55.093186, 17:10:55.195879, 17:10:55.299003 and 17:10:55.401843 ...]
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.504307 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.504379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.504398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.504422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.504486 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.600785 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.600856 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.600869 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.600885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.600899 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:55 crc kubenswrapper[4792]: E1127 17:10:55.659138 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:55Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.662186 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.662211 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.662219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.662231 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.662241 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:55 crc kubenswrapper[4792]: E1127 17:10:55.672069 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:55Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.674890 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.674948 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.674969 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.674991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.675006 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.686059 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:55 crc kubenswrapper[4792]: E1127 17:10:55.686164 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.686337 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:55 crc kubenswrapper[4792]: E1127 17:10:55.686218 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:55Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:55 crc kubenswrapper[4792]: E1127 17:10:55.686402 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.689638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.689695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.689706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.689724 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.689735 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:10:55 crc kubenswrapper[4792]: E1127 17:10:55.700476 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... node-status patch payload identical to the 17:10:55.659138 attempt above: same conditions, allocatable/capacity, image list and nodeInfo ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:55Z is after 2025-08-24T17:21:41Z"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.703902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.703981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.704001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.704018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.704030 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:55 crc kubenswrapper[4792]: E1127 17:10:55.721207 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:55Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:55 crc kubenswrapper[4792]: E1127 17:10:55.721370 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.723037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.723084 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.723101 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.723122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.723138 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.825813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.825887 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.825911 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.825996 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.826029 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.929245 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.929300 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.929335 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.929365 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:55 crc kubenswrapper[4792]: I1127 17:10:55.929387 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:55Z","lastTransitionTime":"2025-11-27T17:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.032234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.032282 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.032292 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.032308 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.032317 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.135029 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.135065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.135076 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.135089 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.135098 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.237275 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.237323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.237334 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.237347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.237356 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.339578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.339613 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.339638 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.339672 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.339680 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.442926 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.443021 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.443055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.443087 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.443108 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.545380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.545456 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.545483 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.545514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.545537 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.648215 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.648255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.648265 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.648279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.648287 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.686636 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.686768 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:56 crc kubenswrapper[4792]: E1127 17:10:56.686892 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:56 crc kubenswrapper[4792]: E1127 17:10:56.687070 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.750535 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.750612 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.750634 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.750703 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.750727 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.854343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.854430 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.854455 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.854486 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.854510 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.957971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.958032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.958048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.958072 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:56 crc kubenswrapper[4792]: I1127 17:10:56.958091 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:56Z","lastTransitionTime":"2025-11-27T17:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.060096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.060135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.060147 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.060165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.060177 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.162628 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.162781 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.162804 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.162826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.162842 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.265755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.265818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.265837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.265861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.265880 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.369214 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.369283 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.369300 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.369325 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.369349 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.472713 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.472780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.472800 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.472826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.472848 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.575681 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.575755 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.575773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.575799 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.575816 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.678293 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.678351 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.678366 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.678384 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.678399 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.686669 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.686711 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:57 crc kubenswrapper[4792]: E1127 17:10:57.686856 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:57 crc kubenswrapper[4792]: E1127 17:10:57.686962 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.782299 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.782412 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.782436 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.782466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.782485 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.885399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.885436 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.885446 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.885463 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.885474 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.989171 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.989256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.989271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.989291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:57 crc kubenswrapper[4792]: I1127 17:10:57.989305 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:57Z","lastTransitionTime":"2025-11-27T17:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.091729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.091765 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.091776 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.091812 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.091826 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:58Z","lastTransitionTime":"2025-11-27T17:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.194045 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.194088 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.194096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.194111 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.194119 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:58Z","lastTransitionTime":"2025-11-27T17:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.296194 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.296262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.296276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.296353 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.296368 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:58Z","lastTransitionTime":"2025-11-27T17:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.398849 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.398906 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.398921 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.398944 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.398957 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:58Z","lastTransitionTime":"2025-11-27T17:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.502400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.502442 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.502451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.502466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.502475 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:58Z","lastTransitionTime":"2025-11-27T17:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.604791 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.604868 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.604890 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.604918 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.604939 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:58Z","lastTransitionTime":"2025-11-27T17:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.686343 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.686382 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:10:58 crc kubenswrapper[4792]: E1127 17:10:58.686589 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:10:58 crc kubenswrapper[4792]: E1127 17:10:58.686765 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.707821 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.707871 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.707885 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.707912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.707933 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:58Z","lastTransitionTime":"2025-11-27T17:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.709434 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.727629 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.740815 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.755723 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.768544 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.781963 4792 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:46Z\\\",\\\"message\\\":\\\"2025-11-27T17:10:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b\\\\n2025-11-27T17:10:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b to /host/opt/cni/bin/\\\\n2025-11-27T17:10:01Z [verbose] multus-daemon started\\\\n2025-11-27T17:10:01Z [verbose] Readiness Indicator file check\\\\n2025-11-27T17:10:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.800972 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z"
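Every one of these status-patch failures has the same root cause: the kubelet's updates must pass through the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-27. Below is a minimal, illustrative Go sketch for confirming this from the node itself (the webhook listens on loopback); the file name and output format are invented, but the NotAfter comparison mirrors the check the failing TLS handshake performs.

```go
// certwindow.go: an illustrative check (not OpenShift tooling) for the
// x509 failure logged above. It connects to the webhook endpoint named
// in the log, skips verification so the handshake completes, and prints
// the leaf certificate's validity window against the local clock.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address taken from the failing webhook URL in the log entries.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want the certificate, not a verified session
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf, now := certs[0], time.Now()
	fmt.Println("NotBefore:", leaf.NotBefore.Format(time.RFC3339))
	fmt.Println("NotAfter: ", leaf.NotAfter.Format(time.RFC3339))
	fmt.Println("Now:      ", now.Format(time.RFC3339))
	if now.After(leaf.NotAfter) {
		// The same condition the kubelet reports: "certificate has
		// expired or is not yet valid: current time ... is after ...".
		fmt.Println("certificate has expired")
	}
}
```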
Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.817560 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.835034 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.853204 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.868722 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.883808 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.893528 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.905746 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.914401 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.914447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.914460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.914477 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.914490 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:58Z","lastTransitionTime":"2025-11-27T17:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.963812 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.981735 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:58 crc kubenswrapper[4792]: I1127 17:10:58.996058 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-27T17:10:58Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.006440 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:59Z is after 2025-08-24T17:21:41Z" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.017345 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.017415 
4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.017430 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.017449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.017461 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.119935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.119995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.120018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.120048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.120069 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.227562 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.228038 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.228094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.228125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.228160 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.331425 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.331482 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.331498 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.331517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.331533 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.434757 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.434832 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.434860 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.434891 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.434915 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.537830 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.537890 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.537908 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.537931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.537948 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.640533 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.640589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.640608 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.640632 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.640681 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.685896 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:10:59 crc kubenswrapper[4792]: E1127 17:10:59.686116 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.686464 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:10:59 crc kubenswrapper[4792]: E1127 17:10:59.687016 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.687837 4792 scope.go:117] "RemoveContainer" containerID="222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.744135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.744178 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.744192 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.744210 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.744221 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.847334 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.847373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.847382 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.847396 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.847406 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.949719 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.949762 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.949773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.949789 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:10:59 crc kubenswrapper[4792]: I1127 17:10:59.949798 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:10:59Z","lastTransitionTime":"2025-11-27T17:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.052291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.052332 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.052341 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.052356 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.052367 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.118295 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/2.log" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.120385 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.120830 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.138191 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1fd9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.151104 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.154557 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.154584 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.154593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.154606 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.154614 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.164006 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.180078 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.196778 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.212681 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.223386 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.234867 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.247519 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.258012 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.258055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.258065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.258084 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.258094 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.263659 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:46Z\\\",\\\"message\\\":\\\"2025-11-27T17:10:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b\\\\n2025-11-27T17:10:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b to /host/opt/cni/bin/\\\\n2025-11-27T17:10:01Z [verbose] multus-daemon started\\\\n2025-11-27T17:10:01Z [verbose] Readiness Indicator file check\\\\n2025-11-27T17:10:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.292491 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 
17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.305151 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.327138 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.341021 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.358853 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.360773 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.360808 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.360818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.360833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 
17:11:00.360848 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.374466 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.387438 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.402635 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z" Nov 27 
17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.464175 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.464218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.464230 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.464247 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.464263 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.567836 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.567907 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.567927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.567951 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.567973 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.671734 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.671776 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.671786 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.671808 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.671820 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.686307 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.686372 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:00 crc kubenswrapper[4792]: E1127 17:11:00.686592 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:00 crc kubenswrapper[4792]: E1127 17:11:00.686774 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.774203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.774256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.774266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.774285 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.774300 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.877020 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.877057 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.877066 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.877079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.877088 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.979816 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.979868 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.979880 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.979900 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:00 crc kubenswrapper[4792]: I1127 17:11:00.979914 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:00Z","lastTransitionTime":"2025-11-27T17:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.082510 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.082543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.082551 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.082564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.082573 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.124946 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/3.log" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.125568 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/2.log" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.127854 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b" exitCode=1 Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.127895 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.127938 4792 scope.go:117] "RemoveContainer" containerID="222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.128888 4792 scope.go:117] "RemoveContainer" containerID="40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b" Nov 27 17:11:01 crc kubenswrapper[4792]: E1127 17:11:01.129111 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.142886 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.159615 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.170337 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 
17:11:01.181359 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.186274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.186314 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.186324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.186340 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.186352 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.194974 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.209617 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.224226 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.236704 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.262804 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.280818 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.288709 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.288751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.288764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.288782 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.288793 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.300383 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.315714 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.328454 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.342691 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:46Z\\\",\\\"message\\\":\\\"2025-11-27T17:10:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b\\\\n2025-11-27T17:10:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b to /host/opt/cni/bin/\\\\n2025-11-27T17:10:01Z [verbose] multus-daemon started\\\\n2025-11-27T17:10:01Z [verbose] Readiness Indicator file check\\\\n2025-11-27T17:10:46Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.369870 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://222595f287c4bdab452b674e7ab262557e1dd165199118bb30da576bb32bd696\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:35Z\\\",\\\"message\\\":\\\"35.595901 6474 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1127 17:10:35.595928 6474 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:10:35Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:10:35.595939 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17:10:35.595953 6474 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-gbrqr\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:11:00Z\\\",\\\"message\\\":\\\"cd-operator for network=default : 3.082633ms\\\\nI1127 17:11:00.477827 6825 default_network_controller.go:776] Recording success event on pod 
openshift-kube-apiserver/kube-apiserver-crc\\\\nF1127 17:11:00.477851 6825 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:11:00.477003 6825 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx after 0 failed attempt(s)\\\\nI1127 17:11:00.477866 6825 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.385931 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.390999 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.391050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.391065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.391085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.391099 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.399596 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.414239 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:01Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.493997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.494052 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.494064 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.494081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.494095 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.596809 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.596861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.596879 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.596904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.596924 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.686411 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.686517 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:01 crc kubenswrapper[4792]: E1127 17:11:01.686729 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:01 crc kubenswrapper[4792]: E1127 17:11:01.686894 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.699555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.699699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.699810 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.699880 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.699943 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.802254 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.802306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.802318 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.802338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.802352 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.904691 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.904754 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.904777 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.904810 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:01 crc kubenswrapper[4792]: I1127 17:11:01.904834 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:01Z","lastTransitionTime":"2025-11-27T17:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.008150 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.008205 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.008222 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.008250 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.008268 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.111067 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.111159 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.111197 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.111228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.111253 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.133352 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/3.log" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.138038 4792 scope.go:117] "RemoveContainer" containerID="40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b" Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.138452 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.159034 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.179583 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.192004 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 
17:11:02.205820 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.216184 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.216281 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.216308 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.217234 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.217276 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.231121 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.246042 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.259246 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.274189 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.293399 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.306397 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.318475 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.320445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.320549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.320570 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.320594 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.320614 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.330740 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.343700 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.354796 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:46Z\\\",\\\"message\\\":\\\"2025-11-27T17:10:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b\\\\n2025-11-27T17:10:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b to /host/opt/cni/bin/\\\\n2025-11-27T17:10:01Z [verbose] multus-daemon started\\\\n2025-11-27T17:10:01Z [verbose] Readiness Indicator file check\\\\n2025-11-27T17:10:46Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.372688 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:11:00Z\\\",\\\"message\\\":\\\"cd-operator for network=default : 3.082633ms\\\\nI1127 17:11:00.477827 6825 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1127 17:11:00.477851 6825 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:11:00.477003 6825 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx after 0 failed attempt(s)\\\\nI1127 17:11:00.477866 6825 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.386322 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.399457 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.411600 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:02Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.425113 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.425284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.425389 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.425493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.425587 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.518581 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.518885 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.518843375 +0000 UTC m=+148.861669723 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.528517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.528556 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.528569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.528589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.528603 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.621424 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.621537 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.621609 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.621707 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.621825 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.621862 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.621882 4792 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.621881 4792 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.621879 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.621992 4792 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.621955 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.621933166 +0000 UTC m=+148.964759514 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.622017 4792 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.622156 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.622060129 +0000 UTC m=+148.964886437 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.622219 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.622191262 +0000 UTC m=+148.965017790 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.622915 4792 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.623059 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.623029381 +0000 UTC m=+148.965855739 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.630918 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.630962 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.630979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.631004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.631020 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.686296 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.686466 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.686802 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:02 crc kubenswrapper[4792]: E1127 17:11:02.687125 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.734878 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.734932 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.734950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.734978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.734999 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.837607 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.837712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.837733 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.837767 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.837788 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.941596 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.941697 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.941716 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.941743 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:02 crc kubenswrapper[4792]: I1127 17:11:02.941761 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:02Z","lastTransitionTime":"2025-11-27T17:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.046213 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.046262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.046274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.046294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.046308 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.148774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.148807 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.148816 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.148835 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.148846 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.230392 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:03 crc kubenswrapper[4792]: E1127 17:11:03.230614 4792 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:11:03 crc kubenswrapper[4792]: E1127 17:11:03.231159 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs podName:2ec75c0b-1943-49d4-8813-bf8cc5218511 nodeName:}" failed. No retries permitted until 2025-11-27 17:12:07.231129168 +0000 UTC m=+149.573955486 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs") pod "network-metrics-daemon-5qmhg" (UID: "2ec75c0b-1943-49d4-8813-bf8cc5218511") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.252852 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.252919 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.252931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.252952 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.252967 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.355583 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.355633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.355660 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.355683 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.355697 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.458979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.459039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.459053 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.459076 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.459087 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.562966 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.563033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.563054 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.563090 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.563110 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.666987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.667081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.667109 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.667144 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.667169 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.686026 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.686025 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:03 crc kubenswrapper[4792]: E1127 17:11:03.686411 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:03 crc kubenswrapper[4792]: E1127 17:11:03.686605 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.769496 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.769549 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.769560 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.769578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.769592 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.874165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.874246 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.874267 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.874300 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.874323 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.977263 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.977330 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.977345 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.977368 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:03 crc kubenswrapper[4792]: I1127 17:11:03.977381 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:03Z","lastTransitionTime":"2025-11-27T17:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.081319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.081378 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.081415 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.081433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.081444 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.184878 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.184976 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.184995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.185017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.185035 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.288488 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.288565 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.288589 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.288619 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.288648 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.390964 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.391017 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.391034 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.391070 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.391088 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.494410 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.494451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.494479 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.494492 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.494501 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.597963 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.598033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.598051 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.598074 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.598090 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.686957 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.686977 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:04 crc kubenswrapper[4792]: E1127 17:11:04.687225 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:04 crc kubenswrapper[4792]: E1127 17:11:04.687452 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.700871 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.700925 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.700946 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.700970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.700989 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.803210 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.803270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.803285 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.803301 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.803311 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.907172 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.907237 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.907250 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.907271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:04 crc kubenswrapper[4792]: I1127 17:11:04.907285 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:04Z","lastTransitionTime":"2025-11-27T17:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.009997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.010100 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.010121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.010160 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.010182 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.113569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.113641 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.113698 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.113718 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.113729 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.216977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.217060 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.217085 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.217121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.217161 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.321379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.321490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.321521 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.321559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.321580 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.424917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.424965 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.424977 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.424992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.425005 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.529487 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.529557 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.529570 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.529588 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.529600 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.632686 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.632758 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.632813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.632842 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.632860 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.686773 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:05 crc kubenswrapper[4792]: E1127 17:11:05.686876 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.686936 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:05 crc kubenswrapper[4792]: E1127 17:11:05.687118 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.700065 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.735752 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.735799 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.735810 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.735826 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.735839 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.838587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.838649 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.838699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.838723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.838781 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.927371 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.927441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.927459 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.927484 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.927500 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:11:05 crc kubenswrapper[4792]: E1127 17:11:05.945992 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:05Z is after 2025-08-24T17:21:41Z"
Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.950559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.950684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.950705 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.950731 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.950748 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: E1127 17:11:05.974898 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.978820 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.978854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.978864 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.978881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.978895 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:05 crc kubenswrapper[4792]: E1127 17:11:05.994492 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:05Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.998507 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.998734 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.998863 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.999182 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:05 crc kubenswrapper[4792]: I1127 17:11:05.999345 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:05Z","lastTransitionTime":"2025-11-27T17:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: E1127 17:11:06.020630 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.026395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.026462 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.026484 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.026513 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.026538 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: E1127 17:11:06.046860 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:06Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:06 crc kubenswrapper[4792]: E1127 17:11:06.047020 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.048847 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.048930 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.048954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.048983 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.049008 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.151395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.151454 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.151476 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.151499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.151516 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.255219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.255303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.255330 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.255363 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.255385 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.357933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.358008 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.358030 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.358059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.358081 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.460926 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.460987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.461004 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.461020 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.461031 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.563904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.563985 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.564005 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.564032 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.564050 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.666742 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.666832 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.666853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.666876 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.666892 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.685794 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.685796 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:06 crc kubenswrapper[4792]: E1127 17:11:06.685951 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:06 crc kubenswrapper[4792]: E1127 17:11:06.686158 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.769524 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.769601 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.769628 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.769692 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.769719 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.873002 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.873128 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.873164 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.873192 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.873212 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.976671 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.976756 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.976777 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.976802 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:06 crc kubenswrapper[4792]: I1127 17:11:06.976821 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:06Z","lastTransitionTime":"2025-11-27T17:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.079474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.079550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.079560 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.079577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.079587 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.181758 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.181803 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.181814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.181829 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.181838 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.284511 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.284564 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.284575 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.284587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.284595 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.387248 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.387661 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.387780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.387881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.387971 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.490839 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.490884 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.490899 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.490922 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.490936 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.593223 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.593262 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.593273 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.593289 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.593300 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.686288 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.686295 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:07 crc kubenswrapper[4792]: E1127 17:11:07.686515 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:07 crc kubenswrapper[4792]: E1127 17:11:07.686671 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.695407 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.695445 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.695456 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.695473 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.695485 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.798571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.798627 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.798688 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.798714 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.798731 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.901902 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.901954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.901971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.901993 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:07 crc kubenswrapper[4792]: I1127 17:11:07.902011 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:07Z","lastTransitionTime":"2025-11-27T17:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.004554 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.004613 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.004636 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.004703 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.004729 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.107260 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.107695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.107897 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.108035 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.108175 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.211043 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.211071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.211081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.211096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.211109 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.313615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.313933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.314057 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.314257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.314368 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.417713 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.417784 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.417802 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.417823 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.417834 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.521997 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.522053 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.522094 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.522128 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.522154 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.624528 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.624610 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.624633 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.624706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.624729 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.686992 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.687174 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:08 crc kubenswrapper[4792]: E1127 17:11:08.687401 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:08 crc kubenswrapper[4792]: E1127 17:11:08.687571 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.713334 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"416187e8-58ba-45d1-972c-5c2fea1afd90\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-27T17:09:52Z\\\",\\\"message\\\":\\\"W1127 17:09:41.805495 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1127 17:09:41.805885 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764263381 cert, and key in /tmp/serving-cert-3799500501/serving-signer.crt, /tmp/serving-cert-3799500501/serving-signer.key\\\\nI1127 17:09:42.054343 1 observer_polling.go:159] Starting file observer\\\\nW1127 17:09:42.057754 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1127 17:09:42.057934 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1127 17:09:42.059554 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3799500501/tls.crt::/tmp/serving-cert-3799500501/tls.key\\\\\\\"\\\\nF1127 17:09:52.410022 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.729227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.729288 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.729312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.729342 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.729369 4792 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.736537 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e9bc6ea-f62b-4a03-8b70-62d149f804da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31c6c700db61fbd276ea743f71fbf8fde760eb693337312ee092c0e2f9afc689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f3c5d28badef463d7bd8926193bac38a5a1d1d4d50580d541e1cfb6ffbfa2d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf205e663f5b5fd161be409de22688d771aff92b9d2f42fd5c17cadc2fa68156\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.758581 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cd8c1f3-5285-46c6-9ecf-2a9a8d8b913b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37142b60c3f7602dce42d9af68452758dc9f8285471d39bcd317d1fc6385d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab4883cae4263c4d72157da41d2859a7beef96d5ddc0f8f22ec70818cd82dbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5269565766d88e342561fc7ff70f8cf278e2acafacb637f8009a114a88e4a59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c9b098749633e1f9296e30bf00e01c78672afcef09ac8b16638ae75c590d2e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.791767 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2dr66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99059038-38b6-4797-a8ba-be8bfaecfa8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7092884bdbdd98aeb27dc34099a27a1211c49d84efd8eb02ab3e6afbb500e5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a446307d41b0d48b88b161585338178ad60d7f744d2450011a5488eae46ef85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5446a9616f56b2293d8659cf7f19b746186bfcd59170f500fec57af209df6375\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db5a352f2cb363a032cf198153f1b9be47869960de702bbdea6df7af26694f64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4382de5a213532f47780098a0cc38a00710e463511660eb16aa64fa7a1763ba6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7e7a26172dbd1ca78c0f252e52300ddc0efaa934a63027735b81997e58c58aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa1681340ad5c0e436b6913a2f40c86bd7d8c94f67dab6dc565bc34e71e82390\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:10:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7nm5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2dr66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.806410 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-q5zkv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ecdda3f2-57fa-4cdf-9b2e-e148452fb25c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d34cb2f46b75a76ea695e00538d09002bf7ed2e3678e148bfa512a7775a7274\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwkqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-q5zkv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.828272 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e123df-81ab-4743-a865-515eadba43af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01dbeb64e22baf39a5c283e19fb2481f95ae77a0f88a98670ffff4fbec45e1b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e67e97336abb2963a9f1ca4182d8e81919c74a37dd0219ffb72256561c4710d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r94b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:10:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-slvg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.833993 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.835289 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.836124 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.836297 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.836426 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.866578 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9c9bc7b-12c1-493b-9874-4d0e1025c533\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe2eaaf9c4da8d5262e8386c25a547a22b5ed88cb652b72938fb306b25a6758\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e5b00babbac6adb52f7c6e138947ab09e45c55e90fcb7d5a40dee2ad760c8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69391973da3d241910c219fbda0c3d34854acc3ab857ee4a4781a50b01f970e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0aca8e8883264a241437f4df3d92a1aa03dd1f
d9197034afbe1aafa52f93b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://909987779c21446ea8e8e5a2783e3f7fab583a0af50163d378e2af7bdde59df6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f599354ed080dcc34df5700c44ec7b8d362795f2040f57b44300ccb45c26cc43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a35aaaeb0aef6f3d88e50bfbdda229742e56683715a4383809d83fe9f03707da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7620a6186c12d874e5b4f98e4c2d9a9ca5a078c4fbf89268001980956734fc4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.884553 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce927e04980eb2c8d19b892e52a984041f5bf5949511304924a7dac893361d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.904192 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6664175c3a44a0c50c00ad11b3b5a4588c25707604b2b6e0e18a2bb5acb6d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9814552036e845976b9e5c3229a3063ebe602d373c980b8a8a01ee98a355b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.921794 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ec75c0b-1943-49d4-8813-bf8cc5218511\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzmmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qmhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.939950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.940016 
4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.940037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.940071 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.940093 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:08Z","lastTransitionTime":"2025-11-27T17:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.942629 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:08 crc kubenswrapper[4792]: I1127 17:11:08.957420 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27e295f4151aa6e53020abe1211fb907c84e846195645b90d3e2580a3047013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:08.969485 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-v6h2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9bfa510-5a70-4b77-a579-9907b15f8176\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b3bd294d5f17273410264f5995f0c4c86b9e63a43767ea2460526bad475815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xgdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-v6h2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:08Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.032222 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.042903 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.042966 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.042978 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.042995 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.043039 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.047784 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04094924d1e9db7864b6e74e7db69c0dd249effc754a0dff486e992815e2f3ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zpkzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-56bcx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.062404 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gbrqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71907161-f8b0-4b44-b61a-0e04200083f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:10:46Z\\\",\\\"message\\\":\\\"2025-11-27T17:10:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b\\\\n2025-11-27T17:10:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c01461f5-b096-4b5b-8f30-5b4f2b5a477b to /host/opt/cni/bin/\\\\n2025-11-27T17:10:01Z [verbose] multus-daemon started\\\\n2025-11-27T17:10:01Z [verbose] Readiness Indicator file check\\\\n2025-11-27T17:10:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h4n44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gbrqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.089195 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5ee573-9a50-4d09-b129-fb461db20cf6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-27T17:11:00Z\\\",\\\"message\\\":\\\"cd-operator for network=default : 3.082633ms\\\\nI1127 17:11:00.477827 6825 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1127 17:11:00.477851 6825 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:00Z is after 2025-08-24T17:21:41Z]\\\\nI1127 17:11:00.477003 6825 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-56bcx after 0 failed attempt(s)\\\\nI1127 17:11:00.477866 6825 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-56bcx\\\\nI1127 17\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-27T17:10:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:10:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c892m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:59Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vkjf7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.102573 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56850c13-8c63-48e1-b550-3ba84d13e784\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb6802de491430032bdb89c4fa8cb01659a4b3ddba39610cab5770246880a2fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ed0f9e54c980f6d41eb66d9aff930cedaf77b0dbebc02
4424a1f4104d935fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ed0f9e54c980f6d41eb66d9aff930cedaf77b0dbebc024424a1f4104d935fb3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-27T17:09:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-27T17:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-27T17:09:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.117133 4792 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-27T17:09:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:09Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.146643 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.146746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.146764 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.146789 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.146810 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.249853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.250294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.250481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.250506 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.250524 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.353305 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.353345 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.353358 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.353375 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.353384 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.455703 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.455779 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.455802 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.455831 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.455850 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.559124 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.559181 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.559192 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.559207 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.559217 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.662429 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.662493 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.662514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.662543 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.662565 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.686301 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.686338 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:09 crc kubenswrapper[4792]: E1127 17:11:09.686479 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:09 crc kubenswrapper[4792]: E1127 17:11:09.686567 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.765677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.765719 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.765730 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.765746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.765758 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.868813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.869215 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.869351 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.869538 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.869738 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.972499 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.972550 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.972562 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.972577 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:09 crc kubenswrapper[4792]: I1127 17:11:09.972591 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:09Z","lastTransitionTime":"2025-11-27T17:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.075518 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.076041 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.076196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.076400 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.076555 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.179490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.179553 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.179565 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.179587 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.179606 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.290719 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.290785 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.290805 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.290834 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.290859 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.393376 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.393424 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.393433 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.393449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.393458 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.496496 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.496572 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.496595 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.496626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.496648 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.599868 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.599933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.599947 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.599973 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.599986 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.686826 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.686855 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:10 crc kubenswrapper[4792]: E1127 17:11:10.687067 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:10 crc kubenswrapper[4792]: E1127 17:11:10.687190 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.703073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.703127 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.703139 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.703158 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.703172 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.806231 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.806290 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.806302 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.806320 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.806331 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.908628 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.908699 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.908712 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.908729 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:10 crc kubenswrapper[4792]: I1127 17:11:10.908740 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:10Z","lastTransitionTime":"2025-11-27T17:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.011121 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.011184 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.011196 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.011215 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.011229 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:11Z","lastTransitionTime":"2025-11-27T17:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.114322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.114399 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.114418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.114441 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.114457 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:11Z","lastTransitionTime":"2025-11-27T17:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.216571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.216631 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.216668 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.216688 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.216701 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:11Z","lastTransitionTime":"2025-11-27T17:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.319768 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.319828 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.319843 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.319863 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.319877 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:11Z","lastTransitionTime":"2025-11-27T17:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.423236 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.423303 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.423318 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.423343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.423360 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:11Z","lastTransitionTime":"2025-11-27T17:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.526201 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.526267 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.526279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.526300 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.526311 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:11Z","lastTransitionTime":"2025-11-27T17:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.628573 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.628644 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.628698 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.628724 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.628742 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:11Z","lastTransitionTime":"2025-11-27T17:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.686414 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:11 crc kubenswrapper[4792]: E1127 17:11:11.686547 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.686747 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 27 17:11:11 crc kubenswrapper[4792]: E1127 17:11:11.686946 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.731387 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.731449 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.731466 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.731490 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:11:11 crc kubenswrapper[4792]: I1127 17:11:11.731510 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:11Z","lastTransitionTime":"2025-11-27T17:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:11:12 crc kubenswrapper[4792]: I1127 17:11:12.686340 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg"
Nov 27 17:11:12 crc kubenswrapper[4792]: I1127 17:11:12.686429 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:11:12 crc kubenswrapper[4792]: E1127 17:11:12.686603 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511"
Nov 27 17:11:12 crc kubenswrapper[4792]: E1127 17:11:12.686816 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 27 17:11:13 crc kubenswrapper[4792]: I1127 17:11:13.685676 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 27 17:11:13 crc kubenswrapper[4792]: I1127 17:11:13.685758 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 27 17:11:13 crc kubenswrapper[4792]: E1127 17:11:13.685794 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 27 17:11:13 crc kubenswrapper[4792]: E1127 17:11:13.685921 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 27 17:11:14 crc kubenswrapper[4792]: I1127 17:11:14.685944 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg"
Nov 27 17:11:14 crc kubenswrapper[4792]: I1127 17:11:14.686035 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:11:14 crc kubenswrapper[4792]: E1127 17:11:14.686090 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511"
Nov 27 17:11:14 crc kubenswrapper[4792]: E1127 17:11:14.686193 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 27 17:11:15 crc kubenswrapper[4792]: I1127 17:11:15.685943 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 27 17:11:15 crc kubenswrapper[4792]: I1127 17:11:15.685958 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 27 17:11:15 crc kubenswrapper[4792]: E1127 17:11:15.686213 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 27 17:11:15 crc kubenswrapper[4792]: E1127 17:11:15.687142 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 27 17:11:15 crc kubenswrapper[4792]: I1127 17:11:15.687499 4792 scope.go:117] "RemoveContainer" containerID="40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b"
Nov 27 17:11:15 crc kubenswrapper[4792]: E1127 17:11:15.687761 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6"
Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.089461 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.089523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.089532 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.089544 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.089556 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 27 17:11:16 crc kubenswrapper[4792]: E1127 17:11:16.102441 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:16Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.106190 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.106219 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.106227 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.106240 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.106251 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: E1127 17:11:16.119719 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:16Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.123238 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.123266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.123279 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.123295 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.123305 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: E1127 17:11:16.135720 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:16Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.141297 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.141346 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.141359 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.141374 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.141387 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: E1127 17:11:16.154546 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:16Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.157611 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.157671 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.157684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.157702 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.157712 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: E1127 17:11:16.169214 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-27T17:11:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"0c8b1357-53ae-4c5b-a3c1-1529964132d7\\\",\\\"systemUUID\\\":\\\"065c870f-af17-4bc7-a39f-66fec82f3422\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-27T17:11:16Z is after 2025-08-24T17:21:41Z" Nov 27 17:11:16 crc kubenswrapper[4792]: E1127 17:11:16.169340 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.171220 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.171256 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.171269 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.171325 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.171340 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.273555 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.273593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.273602 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.273616 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.273626 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.376331 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.376383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.376395 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.376413 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.376855 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.479593 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.479677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.479695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.479723 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.479739 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.582795 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.582874 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.582898 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.582933 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.582958 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.685728 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.685778 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:16 crc kubenswrapper[4792]: E1127 17:11:16.685896 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:16 crc kubenswrapper[4792]: E1127 17:11:16.686066 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.686170 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.686228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.686240 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.686257 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.686267 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.789207 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.789266 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.789284 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.789307 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.789325 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.892619 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.892693 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.892706 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.892732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.892754 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.995562 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.995631 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.995662 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.995684 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:16 crc kubenswrapper[4792]: I1127 17:11:16.995702 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:16Z","lastTransitionTime":"2025-11-27T17:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.099394 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.099491 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.099517 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.099553 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.099575 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.202308 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.202347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.202360 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.202376 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.202386 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.305297 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.305349 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.305363 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.305377 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.305386 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.408207 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.408274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.408291 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.408313 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.408335 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.511893 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.511950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.511960 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.511979 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.511991 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.615523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.615562 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.615571 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.615592 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.615601 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.686021 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.686062 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:17 crc kubenswrapper[4792]: E1127 17:11:17.686195 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:17 crc kubenswrapper[4792]: E1127 17:11:17.686662 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.718447 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.718547 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.718559 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.718578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.718591 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.821059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.821096 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.821108 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.821125 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.821137 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.924001 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.924048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.924058 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.924079 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:17 crc kubenswrapper[4792]: I1127 17:11:17.924091 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:17Z","lastTransitionTime":"2025-11-27T17:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.026772 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.026824 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.026841 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.026861 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.026879 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.129820 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.129856 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.129866 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.129880 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.129888 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.232531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.232567 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.232576 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.232592 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.232603 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.335402 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.335451 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.335460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.335474 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.335484 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.438065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.438107 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.438122 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.438142 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.438158 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.541206 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.541250 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.541261 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.541276 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.541286 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.644152 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.644216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.644228 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.644246 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.644258 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.685755 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:18 crc kubenswrapper[4792]: E1127 17:11:18.685970 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.686051 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:18 crc kubenswrapper[4792]: E1127 17:11:18.686223 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.747165 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.747209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.747218 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.747232 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.747241 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.772602 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.772585235 podStartE2EDuration="1m18.772585235s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.770447745 +0000 UTC m=+101.113274063" watchObservedRunningTime="2025-11-27 17:11:18.772585235 +0000 UTC m=+101.115411553" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.805218 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-v6h2x" podStartSLOduration=81.80519693 podStartE2EDuration="1m21.80519693s" podCreationTimestamp="2025-11-27 17:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.804891513 +0000 UTC m=+101.147717831" watchObservedRunningTime="2025-11-27 17:11:18.80519693 +0000 UTC m=+101.148023248" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.828293 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podStartSLOduration=79.828275452 podStartE2EDuration="1m19.828275452s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.827807641 +0000 UTC m=+101.170633959" watchObservedRunningTime="2025-11-27 17:11:18.828275452 +0000 UTC m=+101.171101770" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.848447 4792 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.848503 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.848514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.848532 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.848543 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.862190 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gbrqr" podStartSLOduration=79.862168188 podStartE2EDuration="1m19.862168188s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.843284525 +0000 UTC m=+101.186110843" watchObservedRunningTime="2025-11-27 17:11:18.862168188 +0000 UTC m=+101.204994516" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.913221 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.913201576 podStartE2EDuration="13.913201576s" podCreationTimestamp="2025-11-27 17:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.898696176 +0000 UTC m=+101.241522514" watchObservedRunningTime="2025-11-27 17:11:18.913201576 +0000 UTC m=+101.256027894" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.923870 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.923848096 podStartE2EDuration="49.923848096s" podCreationTimestamp="2025-11-27 17:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.923627061 +0000 UTC m=+101.266453379" watchObservedRunningTime="2025-11-27 17:11:18.923848096 +0000 UTC m=+101.266674414" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.939175 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2dr66" podStartSLOduration=79.939156115 podStartE2EDuration="1m19.939156115s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.93892567 +0000 UTC m=+101.281751998" watchObservedRunningTime="2025-11-27 17:11:18.939156115 +0000 UTC m=+101.281982433" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.950578 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/node-ca-q5zkv" podStartSLOduration=79.950558443 podStartE2EDuration="1m19.950558443s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.950121703 +0000 UTC m=+101.292948031" watchObservedRunningTime="2025-11-27 17:11:18.950558443 +0000 UTC m=+101.293384761" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.951133 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.951175 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.951191 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.951209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.951221 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:18Z","lastTransitionTime":"2025-11-27T17:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.978874 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-slvg6" podStartSLOduration=78.978852297 podStartE2EDuration="1m18.978852297s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.963091537 +0000 UTC m=+101.305917865" watchObservedRunningTime="2025-11-27 17:11:18.978852297 +0000 UTC m=+101.321678615" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.979417 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.979409991 podStartE2EDuration="1m21.979409991s" podCreationTimestamp="2025-11-27 17:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.978710884 +0000 UTC m=+101.321537202" watchObservedRunningTime="2025-11-27 17:11:18.979409991 +0000 UTC m=+101.322236319" Nov 27 17:11:18 crc kubenswrapper[4792]: I1127 17:11:18.990180 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.990163173 podStartE2EDuration="1m20.990163173s" podCreationTimestamp="2025-11-27 17:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:18.98960502 +0000 UTC m=+101.332431338" watchObservedRunningTime="2025-11-27 17:11:18.990163173 +0000 UTC m=+101.332989491" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.053458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.053501 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.053513 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.053529 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.053541 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.155931 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.155980 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.155992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.156005 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.156013 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.258315 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.258370 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.258389 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.258411 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.258428 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.360774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.360813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.360823 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.360838 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.360852 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.464136 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.464189 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.464203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.464221 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.464235 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.566625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.566704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.566722 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.566746 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.566763 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.669856 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.669904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.669917 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.669935 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.669946 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.686603 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:19 crc kubenswrapper[4792]: E1127 17:11:19.686750 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.686603 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:19 crc kubenswrapper[4792]: E1127 17:11:19.686826 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.772338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.772366 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.772373 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.772385 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.772439 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.875839 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.875903 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.875920 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.875940 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.875959 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.979019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.979089 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.979117 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.979138 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.979151 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:19Z","lastTransitionTime":"2025-11-27T17:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.083523 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.083575 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.083591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.083615 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.083632 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.186632 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.186727 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.186751 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.186780 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.186803 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.346324 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.346369 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.346383 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.346405 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.346421 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.448991 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.449033 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.449044 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.449059 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.449071 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.551220 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.551312 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.551323 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.551338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.551351 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.654732 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.654813 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.654833 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.654854 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.654870 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.685858 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:20 crc kubenswrapper[4792]: E1127 17:11:20.686021 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.685871 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:20 crc kubenswrapper[4792]: E1127 17:11:20.686387 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
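Per the condition message above, this NotReady state persists until a network plugin writes a CNI config into /etc/kubernetes/cni/net.d/. Instead of tailing the journal, the same Ready condition can be polled through the API; a minimal sketch, assuming the official `kubernetes` Python client and a kubeconfig that can reach this cluster (the node name "crc" is taken from the entries above):

import time
from kubernetes import client, config

NODE_NAME = "crc"  # node name as it appears in the log above

config.load_kube_config()  # assumption: local kubeconfig; use load_incluster_config() from a pod
v1 = client.CoreV1Api()

while True:
    node = v1.read_node(NODE_NAME)
    # The "Ready" condition carries the same reason/message the kubelet logs via setters.go.
    ready = next(c for c in node.status.conditions if c.type == "Ready")
    print(f"{ready.last_heartbeat_time} Ready={ready.status} reason={ready.reason}")
    if ready.status == "True":
        break
    time.sleep(5)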
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.758317 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.758394 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.758418 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.758448 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.758471 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.861238 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.861347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.861367 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.861394 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.861413 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.964682 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.964774 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.964807 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.964838 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:20 crc kubenswrapper[4792]: I1127 17:11:20.964858 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:20Z","lastTransitionTime":"2025-11-27T17:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.067950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.068021 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.068039 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.068065 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.068084 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.170485 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.170551 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.170570 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.170591 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.170604 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.273231 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.273295 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.273307 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.273322 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.273333 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.375251 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.375299 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.375310 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.375326 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.375336 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.478909 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.478954 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.478965 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.478981 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.478992 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.581206 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.581258 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.581270 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.581290 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.581302 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.683825 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.683858 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.683867 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.683881 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.683890 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.686127 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.686182 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:21 crc kubenswrapper[4792]: E1127 17:11:21.686276 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:21 crc kubenswrapper[4792]: E1127 17:11:21.686352 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.786725 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.786796 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.786814 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.786842 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.786861 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.889856 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.889912 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.889924 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.889942 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.889955 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.993140 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.993186 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.993198 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.993217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:21 crc kubenswrapper[4792]: I1127 17:11:21.993230 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:21Z","lastTransitionTime":"2025-11-27T17:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.095753 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.095837 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.095853 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.095877 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.095892 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.198271 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.198316 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.198328 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.198346 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.198358 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.300791 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.300828 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.300836 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.300850 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.300860 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.403950 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.403986 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.403999 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.404016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.404029 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.507514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.507569 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.507581 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.507597 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.507607 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.611189 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.611260 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.611278 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.611306 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.611325 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.686817 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.686936 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:22 crc kubenswrapper[4792]: E1127 17:11:22.687080 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:22 crc kubenswrapper[4792]: E1127 17:11:22.687149 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
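Captures like this one compress well once the klog prefix and the embedded heartbeat timestamps are stripped. A rough tally sketch, fed with `journalctl -u kubelet --no-pager` on stdin; the two regexes below are written against the exact entry format shown above and are assumptions, not a general journal parser:

import re
import sys
from collections import Counter

# Strips e.g. 'Nov 27 17:11:19 crc kubenswrapper[4792]: I1127 17:11:19.772338 4792 '
HEADER = re.compile(r"^\w{3} \d{2} [\d:]{8} \S+ \S+\[\d+\]: [IWE]\d{4} [\d:.]+ +\d+ ")
# Normalizes lastHeartbeatTime/lastTransitionTime so repeated conditions collate
ISO_TS = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z")

counts = Counter()
for line in sys.stdin:
    body = HEADER.sub("", line.rstrip("\n"))
    counts[ISO_TS.sub("<ts>", body)] += 1

# The ~100 ms "Recording event message" / "Node became not ready" loop
# dominates the top of this list.
for body, n in counts.most_common(10):
    print(f"{n:6d}  {body[:120]}")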
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.714927 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.714988 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.714999 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.715019 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.715032 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.818379 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.818440 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.818457 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.818481 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.818500 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.921166 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.921236 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.921252 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.921273 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:22 crc kubenswrapper[4792]: I1127 17:11:22.921288 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:22Z","lastTransitionTime":"2025-11-27T17:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.024352 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.024432 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.024458 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.024486 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.024510 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.127452 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.127502 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.127514 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.127532 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.127544 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.231007 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.231061 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.231077 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.231093 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.231104 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.333327 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.333370 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.333381 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.333398 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.333407 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.435944 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.436022 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.436060 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.436092 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.436114 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.540119 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.540188 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.540212 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.540244 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.540267 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.644343 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.645347 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.645363 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.645380 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.645392 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.685964 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.685988 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:23 crc kubenswrapper[4792]: E1127 17:11:23.686196 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:23 crc kubenswrapper[4792]: E1127 17:11:23.686107 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.748272 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.748319 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.748337 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.748353 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.748363 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.850460 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.850501 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.850513 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.850531 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.850541 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.952841 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.952939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.952955 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.952975 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:23 crc kubenswrapper[4792]: I1127 17:11:23.952987 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:23Z","lastTransitionTime":"2025-11-27T17:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.062625 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.062689 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.062704 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.062721 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.062732 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.165153 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.165193 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.165203 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.165216 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.165226 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.267797 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.268037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.268050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.268068 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.268081 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.370308 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.370344 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.370352 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.370369 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.370389 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.472818 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.472884 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.472906 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.472936 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.472959 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.575225 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.575268 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.575280 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.575294 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.575306 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.677135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.677178 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.677189 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.677200 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.677210 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.686728 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.686736 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:24 crc kubenswrapper[4792]: E1127 17:11:24.687018 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:24 crc kubenswrapper[4792]: E1127 17:11:24.687223 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.780873 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.780939 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.780949 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.780970 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.780986 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.884163 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.884217 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.884231 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.884255 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.884271 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.987486 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.987566 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.987592 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.987623 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:24 crc kubenswrapper[4792]: I1127 17:11:24.987752 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:24Z","lastTransitionTime":"2025-11-27T17:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.093928 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.094002 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.094020 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.094048 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.094072 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.196695 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.196989 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.197176 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.197274 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.197364 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.300566 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.300605 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.300614 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.300626 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.300635 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.403870 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.403948 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.403971 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.404003 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.404026 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.506984 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.507037 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.507055 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.507081 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.507099 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.611154 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.611242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.611487 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.611530 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.611574 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.686691 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.686797 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:25 crc kubenswrapper[4792]: E1127 17:11:25.687089 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:25 crc kubenswrapper[4792]: E1127 17:11:25.687292 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.714801 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.714869 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.714888 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.714914 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.714933 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.817480 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.817937 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.818118 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.818338 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.818534 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.921226 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.921342 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.921362 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.921422 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:25 crc kubenswrapper[4792]: I1127 17:11:25.921441 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:25Z","lastTransitionTime":"2025-11-27T17:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.023904 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.023987 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.024016 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.024050 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.024075 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:26Z","lastTransitionTime":"2025-11-27T17:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.127578 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.127630 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.127663 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.127677 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.127686 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:26Z","lastTransitionTime":"2025-11-27T17:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.230508 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.230574 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.230594 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.230618 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.230638 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:26Z","lastTransitionTime":"2025-11-27T17:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.333964 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.334018 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.334054 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.334073 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.334087 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:26Z","lastTransitionTime":"2025-11-27T17:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.353135 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.353209 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.353223 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.353242 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.353254 4792 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-27T17:11:26Z","lastTransitionTime":"2025-11-27T17:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.418934 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb"] Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.419303 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.423061 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.423118 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.423069 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.424491 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.515614 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.515850 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.515945 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.516092 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.516179 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.617353 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.617422 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.617513 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.617550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.617588 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.617715 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.617799 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.619833 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.631239 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.648398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c29a6f-c31c-43dc-b363-9cec1eb23c7a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xdtrb\" (UID: \"12c29a6f-c31c-43dc-b363-9cec1eb23c7a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.686319 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.686365 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:26 crc kubenswrapper[4792]: E1127 17:11:26.686842 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:26 crc kubenswrapper[4792]: E1127 17:11:26.687233 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:26 crc kubenswrapper[4792]: I1127 17:11:26.736323 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:11:27 crc kubenswrapper[4792]: I1127 17:11:27.234056 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" event={"ID":"12c29a6f-c31c-43dc-b363-9cec1eb23c7a","Type":"ContainerStarted","Data":"0500b8daa139136d6a9799d755ebc42cefc7cc99e474176a8a3c69228c8e3b44"}
Nov 27 17:11:27 crc kubenswrapper[4792]: I1127 17:11:27.234132 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" event={"ID":"12c29a6f-c31c-43dc-b363-9cec1eb23c7a","Type":"ContainerStarted","Data":"4dbc9652904c1356446344a3ce14fd0dc69f91dd0531231f3ccba2ef51ad3064"}
Nov 27 17:11:27 crc kubenswrapper[4792]: I1127 17:11:27.254117 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xdtrb" podStartSLOduration=88.254083856 podStartE2EDuration="1m28.254083856s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:27.253721418 +0000 UTC m=+109.596547756" watchObservedRunningTime="2025-11-27 17:11:27.254083856 +0000 UTC m=+109.596910174"
Nov 27 17:11:27 crc kubenswrapper[4792]: I1127 17:11:27.686188 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 27 17:11:27 crc kubenswrapper[4792]: I1127 17:11:27.686264 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 27 17:11:27 crc kubenswrapper[4792]: E1127 17:11:27.686420 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 27 17:11:27 crc kubenswrapper[4792]: E1127 17:11:27.686570 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 27 17:11:28 crc kubenswrapper[4792]: I1127 17:11:28.685816 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg"
Nov 27 17:11:28 crc kubenswrapper[4792]: I1127 17:11:28.685863 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:11:28 crc kubenswrapper[4792]: E1127 17:11:28.687148 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511"
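
Annotation (editorial): the startup-latency line is self-consistent: the pod was created at 17:09:59 and observed running at 17:11:27.254, so 17:11:27.254 - 17:09:59.000 = 88.254 s, matching the logged podStartSLOduration=88.254083856 (and podStartE2EDuration="1m28.254083856s"). The zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was needed, which is why the SLO and end-to-end durations coincide.
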
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:28 crc kubenswrapper[4792]: E1127 17:11:28.687367 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:29 crc kubenswrapper[4792]: I1127 17:11:29.686144 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:29 crc kubenswrapper[4792]: I1127 17:11:29.686212 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:29 crc kubenswrapper[4792]: E1127 17:11:29.686307 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:29 crc kubenswrapper[4792]: E1127 17:11:29.686439 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:30 crc kubenswrapper[4792]: I1127 17:11:30.685832 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:30 crc kubenswrapper[4792]: E1127 17:11:30.686084 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:30 crc kubenswrapper[4792]: I1127 17:11:30.686161 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:30 crc kubenswrapper[4792]: E1127 17:11:30.686787 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:30 crc kubenswrapper[4792]: I1127 17:11:30.687171 4792 scope.go:117] "RemoveContainer" containerID="40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b" Nov 27 17:11:30 crc kubenswrapper[4792]: E1127 17:11:30.687458 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vkjf7_openshift-ovn-kubernetes(cd5ee573-9a50-4d09-b129-fb461db20cf6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" Nov 27 17:11:31 crc kubenswrapper[4792]: I1127 17:11:31.686604 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:31 crc kubenswrapper[4792]: I1127 17:11:31.686711 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:31 crc kubenswrapper[4792]: E1127 17:11:31.686854 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:31 crc kubenswrapper[4792]: E1127 17:11:31.686730 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:32 crc kubenswrapper[4792]: I1127 17:11:32.685903 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:32 crc kubenswrapper[4792]: I1127 17:11:32.685981 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:32 crc kubenswrapper[4792]: E1127 17:11:32.686109 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:32 crc kubenswrapper[4792]: E1127 17:11:32.686678 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:33 crc kubenswrapper[4792]: I1127 17:11:33.255083 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/1.log" Nov 27 17:11:33 crc kubenswrapper[4792]: I1127 17:11:33.255688 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/0.log" Nov 27 17:11:33 crc kubenswrapper[4792]: I1127 17:11:33.255749 4792 generic.go:334] "Generic (PLEG): container finished" podID="71907161-f8b0-4b44-b61a-0e04200083f0" containerID="0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9" exitCode=1 Nov 27 17:11:33 crc kubenswrapper[4792]: I1127 17:11:33.255790 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gbrqr" event={"ID":"71907161-f8b0-4b44-b61a-0e04200083f0","Type":"ContainerDied","Data":"0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9"} Nov 27 17:11:33 crc kubenswrapper[4792]: I1127 17:11:33.255839 4792 scope.go:117] "RemoveContainer" containerID="fa13ff30a6f471a8a02c66a63f1b114eca8fc9a9e9abc7ccb0e503db62be63d7" Nov 27 17:11:33 crc kubenswrapper[4792]: I1127 17:11:33.256329 4792 scope.go:117] "RemoveContainer" containerID="0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9" Nov 27 17:11:33 crc kubenswrapper[4792]: E1127 17:11:33.256562 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gbrqr_openshift-multus(71907161-f8b0-4b44-b61a-0e04200083f0)\"" pod="openshift-multus/multus-gbrqr" podUID="71907161-f8b0-4b44-b61a-0e04200083f0" Nov 27 17:11:33 crc kubenswrapper[4792]: I1127 17:11:33.686193 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:33 crc kubenswrapper[4792]: I1127 17:11:33.686221 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:33 crc kubenswrapper[4792]: E1127 17:11:33.686379 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:33 crc kubenswrapper[4792]: E1127 17:11:33.686504 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:34 crc kubenswrapper[4792]: I1127 17:11:34.261900 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/1.log" Nov 27 17:11:34 crc kubenswrapper[4792]: I1127 17:11:34.685908 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:34 crc kubenswrapper[4792]: E1127 17:11:34.686505 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:34 crc kubenswrapper[4792]: I1127 17:11:34.685974 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:34 crc kubenswrapper[4792]: E1127 17:11:34.686914 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:35 crc kubenswrapper[4792]: I1127 17:11:35.686208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:35 crc kubenswrapper[4792]: I1127 17:11:35.686209 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:35 crc kubenswrapper[4792]: E1127 17:11:35.686383 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:35 crc kubenswrapper[4792]: E1127 17:11:35.686531 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:36 crc kubenswrapper[4792]: I1127 17:11:36.685871 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:11:36 crc kubenswrapper[4792]: E1127 17:11:36.686059 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511"
Nov 27 17:11:36 crc kubenswrapper[4792]: I1127 17:11:36.686126 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:11:36 crc kubenswrapper[4792]: E1127 17:11:36.686631 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 27 17:11:37 crc kubenswrapper[4792]: I1127 17:11:37.685996 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 27 17:11:37 crc kubenswrapper[4792]: I1127 17:11:37.686062 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 27 17:11:37 crc kubenswrapper[4792]: E1127 17:11:37.686124 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 27 17:11:37 crc kubenswrapper[4792]: E1127 17:11:37.686185 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 27 17:11:38 crc kubenswrapper[4792]: I1127 17:11:38.696721 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg"
Nov 27 17:11:38 crc kubenswrapper[4792]: I1127 17:11:38.696743 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:11:38 crc kubenswrapper[4792]: E1127 17:11:38.696860 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511"
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:38 crc kubenswrapper[4792]: E1127 17:11:38.697758 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:38 crc kubenswrapper[4792]: E1127 17:11:38.703352 4792 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 27 17:11:38 crc kubenswrapper[4792]: E1127 17:11:38.844363 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:11:39 crc kubenswrapper[4792]: I1127 17:11:39.686141 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:39 crc kubenswrapper[4792]: I1127 17:11:39.686167 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:39 crc kubenswrapper[4792]: E1127 17:11:39.686300 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:39 crc kubenswrapper[4792]: E1127 17:11:39.686495 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:40 crc kubenswrapper[4792]: I1127 17:11:40.686709 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:40 crc kubenswrapper[4792]: I1127 17:11:40.686787 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:40 crc kubenswrapper[4792]: E1127 17:11:40.686898 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:40 crc kubenswrapper[4792]: E1127 17:11:40.687060 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:41 crc kubenswrapper[4792]: I1127 17:11:41.686479 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:41 crc kubenswrapper[4792]: I1127 17:11:41.686702 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:41 crc kubenswrapper[4792]: E1127 17:11:41.686731 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:41 crc kubenswrapper[4792]: E1127 17:11:41.687076 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:42 crc kubenswrapper[4792]: I1127 17:11:42.685872 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:42 crc kubenswrapper[4792]: I1127 17:11:42.685912 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:42 crc kubenswrapper[4792]: E1127 17:11:42.686174 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:42 crc kubenswrapper[4792]: E1127 17:11:42.686331 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:43 crc kubenswrapper[4792]: I1127 17:11:43.685985 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:43 crc kubenswrapper[4792]: I1127 17:11:43.686035 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:43 crc kubenswrapper[4792]: E1127 17:11:43.686188 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:43 crc kubenswrapper[4792]: E1127 17:11:43.686455 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:43 crc kubenswrapper[4792]: E1127 17:11:43.846448 4792 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:11:44 crc kubenswrapper[4792]: I1127 17:11:44.686318 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:44 crc kubenswrapper[4792]: I1127 17:11:44.686362 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:44 crc kubenswrapper[4792]: E1127 17:11:44.686779 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:44 crc kubenswrapper[4792]: E1127 17:11:44.686860 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:44 crc kubenswrapper[4792]: I1127 17:11:44.686997 4792 scope.go:117] "RemoveContainer" containerID="40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b" Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.303735 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/3.log" Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.307315 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerStarted","Data":"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa"} Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.307750 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.340127 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podStartSLOduration=106.340098362 podStartE2EDuration="1m46.340098362s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:11:45.339229752 +0000 UTC m=+127.682056110" watchObservedRunningTime="2025-11-27 17:11:45.340098362 +0000 UTC m=+127.682924730" Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.628003 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5qmhg"] Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.628160 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:45 crc kubenswrapper[4792]: E1127 17:11:45.628299 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.686276 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:45 crc kubenswrapper[4792]: E1127 17:11:45.686438 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.686301 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:45 crc kubenswrapper[4792]: I1127 17:11:45.686624 4792 scope.go:117] "RemoveContainer" containerID="0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9" Nov 27 17:11:45 crc kubenswrapper[4792]: E1127 17:11:45.686720 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:46 crc kubenswrapper[4792]: I1127 17:11:46.313533 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/1.log" Nov 27 17:11:46 crc kubenswrapper[4792]: I1127 17:11:46.313615 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gbrqr" event={"ID":"71907161-f8b0-4b44-b61a-0e04200083f0","Type":"ContainerStarted","Data":"bf643a2c9717f4792a8748a68706497c448539b1e60802be4934fccee2c8e838"} Nov 27 17:11:46 crc kubenswrapper[4792]: I1127 17:11:46.686242 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:46 crc kubenswrapper[4792]: E1127 17:11:46.686364 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:47 crc kubenswrapper[4792]: I1127 17:11:47.685791 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:47 crc kubenswrapper[4792]: I1127 17:11:47.685932 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:47 crc kubenswrapper[4792]: E1127 17:11:47.686105 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 27 17:11:47 crc kubenswrapper[4792]: E1127 17:11:47.686304 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 27 17:11:47 crc kubenswrapper[4792]: I1127 17:11:47.686922 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:47 crc kubenswrapper[4792]: E1127 17:11:47.687125 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qmhg" podUID="2ec75c0b-1943-49d4-8813-bf8cc5218511" Nov 27 17:11:48 crc kubenswrapper[4792]: I1127 17:11:48.686725 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:48 crc kubenswrapper[4792]: E1127 17:11:48.688587 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.686128 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.686177 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.686198 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.690064 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.690199 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.691012 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.691158 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.691168 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 27 17:11:49 crc kubenswrapper[4792]: I1127 17:11:49.691603 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 27 17:11:50 crc kubenswrapper[4792]: I1127 17:11:50.686824 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:11:50 crc kubenswrapper[4792]: I1127 17:11:50.936494 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:11:56 crc kubenswrapper[4792]: I1127 17:11:56.963992 4792 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.019608 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6lwpq"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.020488 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.020857 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-smxkj"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.021394 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.021911 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.022507 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.022871 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g7kpw"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.023445 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.023959 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-z84xw"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.024274 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.025826 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.026385 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.027839 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.031370 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.031496 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.031611 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.031702 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.032308 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.037882 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.038008 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.038022 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.038816 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.038864 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.038895 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.039027 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.039391 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.039813 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.040075 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.040182 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.040285 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.040442 4792 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.040463 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.040597 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.040951 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.041118 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.041248 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.041495 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8gb65"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.042030 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.042550 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.042683 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.042773 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043138 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043368 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043501 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043574 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043612 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043618 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043779 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rr6k4"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043844 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.043884 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.044017 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.044125 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.044123 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.044280 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.060289 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.060963 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.061972 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jz8bh"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.062420 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.062866 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.063340 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.074413 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.076526 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.076788 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.076878 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.077227 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.079100 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.079128 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.079994 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.080043 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.080485 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.080868 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.081004 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.081200 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.081289 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.081468 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.084900 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.084962 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.085186 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.085854 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.085986 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.086685 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.087032 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-k86pd"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.087509 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.087844 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-r96bc"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.088120 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-r96bc" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.089524 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.089688 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.089810 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.089827 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ft9s4"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.090162 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.091126 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.091598 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.091903 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.092223 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.096511 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.098109 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.098274 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.098394 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.098505 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.099136 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.099297 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.099456 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.099558 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.099671 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.099777 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.099969 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100052 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100124 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100207 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100342 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100435 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100556 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100596 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100741 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.100845 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g7kpw"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.101685 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qgj4h"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.102284 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.103416 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.104015 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.104986 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.105240 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.105349 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.105435 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.105517 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.105622 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.105823 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.105913 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.105989 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.106036 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.106201 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.106357 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 27 17:11:57 crc 
kubenswrapper[4792]: I1127 17:11:57.106441 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.106508 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.106719 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.106806 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.106908 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.114744 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.117407 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.118160 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.118217 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.119407 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.120625 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.148103 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.149870 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6lwpq"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.149903 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.150516 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.150748 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qgvjw"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.151312 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.152408 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.152619 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.153719 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.153908 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.156623 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.156829 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.163128 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.163744 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.164086 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.164430 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.164451 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5jcbf"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.164959 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.165378 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.165558 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.165667 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.165864 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.165896 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.165867 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.168236 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.169974 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7psgx"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.170586 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.171393 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7b9r2"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.174783 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.187840 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.193939 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.194524 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.196321 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tswzz"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.196878 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.201679 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.202231 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.203432 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.203937 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.208064 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.210304 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.213088 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4855s"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.213814 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.215094 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.215563 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-smxkj"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.215592 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4855s" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.215671 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.220807 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-r96bc"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.224457 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.228589 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.229771 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.230782 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.234919 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-z84xw"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.234944 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rr6k4"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.240953 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8gb65"] Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.242442 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.243675 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks"] Nov 27 17:11:57 crc 
kubenswrapper[4792]: I1127 17:11:57.245836 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.250942 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qgj4h"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.252552 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.254713 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.258211 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5jcbf"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.259204 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7psgx"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.260256 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.261405 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.262470 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4855s"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.262985 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.263512 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.264507 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.265722 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.266734 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.267992 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.269419 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ft9s4"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.270514 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.271855 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k86pd"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.272944 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jz8bh"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.273938 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fz8kn"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.274995 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tswzz"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.275011 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fz8kn"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.276072 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.277090 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-t47sj"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.277496 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t47sj"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.278314 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7b9r2"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.279634 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.280814 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.282008 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.282170 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fz8kn"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.283418 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.284445 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.285373 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wx7m6"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.287421 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wx7m6"]
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.287535 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wx7m6"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.302415 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.323467 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.342567 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.362396 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.382962 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.402401 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.422326 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.442692 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.462479 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.483085 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.502089 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.523220 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.542048 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.563070 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.583751 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.603333 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.623078 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.643496 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.684580 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.704383 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.723862 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.759389 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.763400 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.803678 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.823705 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.843926 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.863339 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.883299 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.903213 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.924354 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.944129 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.964410 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 27 17:11:57 crc kubenswrapper[4792]: I1127 17:11:57.983403 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.002796 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.023058 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.043756 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.064238 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.083416 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.104217 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.123887 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.142775 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.163010 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.181565 4792 request.go:700] Waited for 1.015430436s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.183844 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.202980 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.223490 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.243048 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.263251 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.282903 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.302614 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.324148 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.343298 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.363718 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.395770 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.403968 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.423155 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.442519 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.462424 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.483102 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.502827 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.522411 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.542568 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.563856 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.583626 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.603281 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.624000 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.643296 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.663455 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.682735 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.704204 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.723448 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.742952 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.764300 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.783324 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.804187 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.823327 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.843051 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.863052 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.883103 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.903172 4792 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.923184 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.942676 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.962696 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 27 17:11:58 crc kubenswrapper[4792]: I1127 17:11:58.982998 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.003106 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.023515 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.043222 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.062985 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.083193 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194288 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f12779a-be6f-4854-adc8-023f3da9c562-auth-proxy-config\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194349 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-console-config\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194376 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-config\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194395 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-audit\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194414 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-audit-dir\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194436 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwqm\" (UniqueName: \"kubernetes.io/projected/165ef6ad-2a73-4126-b694-938ecbe6cd77-kube-api-access-bvwqm\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194460 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07498167-d644-44f1-943b-fda3bd8de13d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194621 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-config\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194716 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kkdd\" (UniqueName: \"kubernetes.io/projected/39ccadf8-7389-4ffd-a72d-efd46066c233-kube-api-access-8kkdd\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34727a2e-1e3a-4371-9052-7df4c6693f44-serving-cert\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194787 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/01e31968-eb99-4a34-a9da-25ffd0101936-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194827 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7f12779a-be6f-4854-adc8-023f3da9c562-machine-approver-tls\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-trusted-ca\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194886 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-node-pullsecrets\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194917 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ccadf8-7389-4ffd-a72d-efd46066c233-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194949 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.194979 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195007 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01e31968-eb99-4a34-a9da-25ffd0101936-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195055 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/551f21a4-3206-4d98-82a5-82274989d3ae-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-htmtg\" (UID: \"551f21a4-3206-4d98-82a5-82274989d3ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195078 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr2gg\" (UniqueName: \"kubernetes.io/projected/aea3e158-90f4-4df7-9fbd-65cfdf94a813-kube-api-access-kr2gg\") pod \"dns-operator-744455d44c-rr6k4\" (UID: \"aea3e158-90f4-4df7-9fbd-65cfdf94a813\") " pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195115 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-config\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195139 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/becf7050-f3f8-42a3-bf02-cf9347e493e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195158 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj2lc\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-kube-api-access-bj2lc\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195180 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57fn\" (UniqueName: \"kubernetes.io/projected/01e31968-eb99-4a34-a9da-25ffd0101936-kube-api-access-z57fn\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195223 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-client-ca\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195245 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195265 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aea3e158-90f4-4df7-9fbd-65cfdf94a813-metrics-tls\") pod \"dns-operator-744455d44c-rr6k4\" (UID: \"aea3e158-90f4-4df7-9fbd-65cfdf94a813\") " pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195285 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8s7l\" (UniqueName: \"kubernetes.io/projected/00795f3d-b1b4-494b-8898-380798319532-kube-api-access-p8s7l\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195315 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195338 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m5tv\" (UniqueName: \"kubernetes.io/projected/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-kube-api-access-4m5tv\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195360 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-serving-cert\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195382 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195416 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195437 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f12779a-be6f-4854-adc8-023f3da9c562-config\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195461 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-service-ca\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-encryption-config\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-config\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195526 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.195545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.197768 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-images\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.197833 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-serving-cert\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.197852 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-service-ca-bundle\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.197874 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.197898 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-certificates\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.197916 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07498167-d644-44f1-943b-fda3bd8de13d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.197933 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-oauth-serving-cert\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.197963 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:11:59.697940291 +0000 UTC m=+142.040766649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198009 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-etcd-serving-ca\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198038 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-client\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198067 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-etcd-client\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198091 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07498167-d644-44f1-943b-fda3bd8de13d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198126 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-service-ca\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198157 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvt68\" (UniqueName: \"kubernetes.io/projected/84fc3c58-35a0-4347-b094-27f5fc4e7aae-kube-api-access-dvt68\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198186 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxfk\" (UniqueName: \"kubernetes.io/projected/a0015215-3f91-43fc-bbee-2560bb8f4c62-kube-api-access-ckxfk\") pod \"downloads-7954f5f757-r96bc\" (UID: \"a0015215-3f91-43fc-bbee-2560bb8f4c62\") " pod="openshift-console/downloads-7954f5f757-r96bc"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198208 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39ccadf8-7389-4ffd-a72d-efd46066c233-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198228 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-trusted-ca\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198273 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrtg\" (UniqueName: \"kubernetes.io/projected/7f12779a-be6f-4854-adc8-023f3da9c562-kube-api-access-jtrtg\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198295 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-config\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198312 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/165ef6ad-2a73-4126-b694-938ecbe6cd77-audit-dir\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198341 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-serving-cert\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198363 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01e31968-eb99-4a34-a9da-25ffd0101936-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198390 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-serving-cert\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198412 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198434 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198455 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31d7df6d-769e-46b7-878c-2966896b8646-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-trusted-ca-bundle\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198493 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-config\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198516 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198536 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-client-ca\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198554 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00795f3d-b1b4-494b-8898-380798319532-serving-cert\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198575 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31d7df6d-769e-46b7-878c-2966896b8646-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198595 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2dl\" (UniqueName: \"kubernetes.io/projected/8412b381-cbf1-4f9c-8e93-6991812b725d-kube-api-access-wt2dl\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198666 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-bound-sa-token\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198686 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-oauth-config\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198780 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-image-import-ca\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198808 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpksg\" (UniqueName: \"kubernetes.io/projected/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-kube-api-access-fpksg\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198828 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-ca\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198848 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198870 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198894 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq47b\" (UniqueName: \"kubernetes.io/projected/93d84de9-e75f-4127-b3ee-890375498dc3-kube-api-access-dq47b\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198919 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-dir\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198939 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sggcf\" (UniqueName: \"kubernetes.io/projected/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-kube-api-access-sggcf\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198958 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fc3c58-35a0-4347-b094-27f5fc4e7aae-serving-cert\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.198980 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls52n\" (UniqueName: \"kubernetes.io/projected/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-kube-api-access-ls52n\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199003 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/becf7050-f3f8-42a3-bf02-cf9347e493e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199021 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-config\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199042 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-serving-cert\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199067 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-etcd-client\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199087 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199110 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm64t\" (UniqueName: \"kubernetes.io/projected/79c41335-5b2d-4567-93ab-48948dfdd24c-kube-api-access-pm64t\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199130 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-tls\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199150 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvnsx\" (UniqueName: \"kubernetes.io/projected/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-kube-api-access-jvnsx\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79c41335-5b2d-4567-93ab-48948dfdd24c-serving-cert\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199190 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-policies\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199209 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5r4q\" (UniqueName: \"kubernetes.io/projected/34727a2e-1e3a-4371-9052-7df4c6693f44-kube-api-access-d5r4q\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199230 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199254 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199274 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/84fc3c58-35a0-4347-b094-27f5fc4e7aae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199296 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4t29\" (UniqueName: \"kubernetes.io/projected/31d7df6d-769e-46b7-878c-2966896b8646-kube-api-access-h4t29\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199313 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xfqw\" (UniqueName: \"kubernetes.io/projected/551f21a4-3206-4d98-82a5-82274989d3ae-kube-api-access-6xfqw\") pod \"cluster-samples-operator-665b6dd947-htmtg\" (UID: \"551f21a4-3206-4d98-82a5-82274989d3ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199329 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-audit-policies\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.199343 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-encryption-config\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.300354 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.300598 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-etcd-client\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"
Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.300625 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:11:59.800596656 +0000 UTC m=+142.143423014 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.300756 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7418e8a9-d007-44c8-9969-0097ab135a74-signing-key\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.300829 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd629fc6-afbd-4fba-ad8a-af3aa86487b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hxdbd\" (UID: \"bd629fc6-afbd-4fba-ad8a-af3aa86487b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.300882 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-service-ca\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.300920 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvt68\" (UniqueName: \"kubernetes.io/projected/84fc3c58-35a0-4347-b094-27f5fc4e7aae-kube-api-access-dvt68\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.300954 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.300988 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7418e8a9-d007-44c8-9969-0097ab135a74-signing-cabundle\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301025 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckxfk\" (UniqueName: \"kubernetes.io/projected/a0015215-3f91-43fc-bbee-2560bb8f4c62-kube-api-access-ckxfk\") pod \"downloads-7954f5f757-r96bc\" (UID: \"a0015215-3f91-43fc-bbee-2560bb8f4c62\") " pod="openshift-console/downloads-7954f5f757-r96bc" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301061 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtrtg\" (UniqueName: \"kubernetes.io/projected/7f12779a-be6f-4854-adc8-023f3da9c562-kube-api-access-jtrtg\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-config\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/165ef6ad-2a73-4126-b694-938ecbe6cd77-audit-dir\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301183 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9gw\" (UniqueName: \"kubernetes.io/projected/cd1802c8-be73-40c4-b495-54eede995a32-kube-api-access-5q9gw\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301243 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db76n\" (UniqueName: \"kubernetes.io/projected/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-kube-api-access-db76n\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301296 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7vmq\" (UniqueName: \"kubernetes.io/projected/0436d85a-59ed-46ee-b01c-d82423c932b0-kube-api-access-g7vmq\") pod \"dns-default-4855s\" (UID: \"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-serving-cert\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301405 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301460 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/01e31968-eb99-4a34-a9da-25ffd0101936-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301518 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1a95901-32c9-465a-a8b6-e44c289beb03-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301581 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301634 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-mountpoint-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301723 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-serving-cert\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301777 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1143d96-b3c1-4892-8f6b-3672cab07e9c-config\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301830 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301926 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2dl\" (UniqueName: \"kubernetes.io/projected/8412b381-cbf1-4f9c-8e93-6991812b725d-kube-api-access-wt2dl\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.301977 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0436d85a-59ed-46ee-b01c-d82423c932b0-config-volume\") pod \"dns-default-4855s\" (UID: \"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.302028 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc302df1-2d39-41a5-be37-eaccc5cde214-webhook-cert\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.302068 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-service-ca\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.302078 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31dcae62-0572-4873-b054-06b731401d8e-cert\") pod \"ingress-canary-wx7m6\" (UID: \"31dcae62-0572-4873-b054-06b731401d8e\") " pod="openshift-ingress-canary/ingress-canary-wx7m6" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.302167 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpksg\" (UniqueName: \"kubernetes.io/projected/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-kube-api-access-fpksg\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.302470 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/165ef6ad-2a73-4126-b694-938ecbe6cd77-audit-dir\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.302634 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-ca\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.302812 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.302984 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfnq\" (UniqueName: \"kubernetes.io/projected/7418e8a9-d007-44c8-9969-0097ab135a74-kube-api-access-spfnq\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303109 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdww2\" (UniqueName: \"kubernetes.io/projected/8ac750b0-4cbd-474b-a99c-1ddafb554107-kube-api-access-bdww2\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-dir\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303230 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/becf7050-f3f8-42a3-bf02-cf9347e493e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303262 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sggcf\" (UniqueName: \"kubernetes.io/projected/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-kube-api-access-sggcf\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303292 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fc3c58-35a0-4347-b094-27f5fc4e7aae-serving-cert\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303324 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls52n\" (UniqueName: \"kubernetes.io/projected/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-kube-api-access-ls52n\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303358 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/101bdb1c-75a1-4d92-90e9-360cece56c1e-service-ca-bundle\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303389 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-etcd-client\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303420 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-socket-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303452 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bc302df1-2d39-41a5-be37-eaccc5cde214-tmpfs\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303485 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm64t\" (UniqueName: \"kubernetes.io/projected/79c41335-5b2d-4567-93ab-48948dfdd24c-kube-api-access-pm64t\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303514 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/94e411a4-eb23-4669-ba67-26f4e6cc6605-certs\") pod \"machine-config-server-t47sj\" (UID: \"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303526 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-dir\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303543 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-tls\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303575 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvnsx\" (UniqueName: \"kubernetes.io/projected/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-kube-api-access-jvnsx\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4t29\" (UniqueName: \"kubernetes.io/projected/31d7df6d-769e-46b7-878c-2966896b8646-kube-api-access-h4t29\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xfqw\" (UniqueName: \"kubernetes.io/projected/551f21a4-3206-4d98-82a5-82274989d3ae-kube-api-access-6xfqw\") pod \"cluster-samples-operator-665b6dd947-htmtg\" (UID: \"551f21a4-3206-4d98-82a5-82274989d3ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303746 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/84fc3c58-35a0-4347-b094-27f5fc4e7aae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303812 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-audit-policies\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303841 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-encryption-config\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303873 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5542z\" (UniqueName: \"kubernetes.io/projected/606bdb55-1c87-4057-bd78-91a9769dcd1c-kube-api-access-5542z\") pod \"migrator-59844c95c7-8hf2h\" (UID: \"606bdb55-1c87-4057-bd78-91a9769dcd1c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-audit-dir\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.303977 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: 
I1127 17:11:59.304010 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc302df1-2d39-41a5-be37-eaccc5cde214-apiservice-cert\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304047 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7f12779a-be6f-4854-adc8-023f3da9c562-machine-approver-tls\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304063 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304077 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-trusted-ca\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304187 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd1802c8-be73-40c4-b495-54eede995a32-proxy-tls\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304204 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-ca\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304233 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304342 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-447sk\" (UniqueName: \"kubernetes.io/projected/98b6db89-c5c7-4ec3-90dd-013390b75f20-kube-api-access-447sk\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304404 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5j96c\" (UniqueName: \"kubernetes.io/projected/ecf50e14-57e0-49d5-8581-6842527b63bc-kube-api-access-5j96c\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj2lc\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-kube-api-access-bj2lc\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304525 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304576 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-config\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304629 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003746e3-d80d-40d7-aac1-03bec863a85d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh89w\" (UniqueName: \"kubernetes.io/projected/101bdb1c-75a1-4d92-90e9-360cece56c1e-kube-api-access-kh89w\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304798 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ecf50e14-57e0-49d5-8581-6842527b63bc-images\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304859 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z57fn\" (UniqueName: \"kubernetes.io/projected/01e31968-eb99-4a34-a9da-25ffd0101936-kube-api-access-z57fn\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304915 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/aea3e158-90f4-4df7-9fbd-65cfdf94a813-metrics-tls\") pod \"dns-operator-744455d44c-rr6k4\" (UID: \"aea3e158-90f4-4df7-9fbd-65cfdf94a813\") " pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.304966 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-registration-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07e202a3-1f47-409c-83f2-a066ddb1ffe2-secret-volume\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305074 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m5tv\" (UniqueName: \"kubernetes.io/projected/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-kube-api-access-4m5tv\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305127 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-serving-cert\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305175 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-csi-data-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305279 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-service-ca\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305334 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-config\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305384 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnfl2\" (UniqueName: \"kubernetes.io/projected/e0f375d2-c00a-49a3-963d-5d2bb71fa625-kube-api-access-jnfl2\") pod \"control-plane-machine-set-operator-78cbb6b69f-cx4g9\" (UID: \"e0f375d2-c00a-49a3-963d-5d2bb71fa625\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305418 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-trusted-ca\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305427 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-config\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305468 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-service-ca-bundle\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305524 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-stats-auth\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305581 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-certificates\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305635 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07498167-d644-44f1-943b-fda3bd8de13d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-oauth-serving-cert\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305813 
4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-etcd-serving-ca\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305867 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305923 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-client\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.305980 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07498167-d644-44f1-943b-fda3bd8de13d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306040 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd1802c8-be73-40c4-b495-54eede995a32-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306091 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf50e14-57e0-49d5-8581-6842527b63bc-proxy-tls\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306138 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1143d96-b3c1-4892-8f6b-3672cab07e9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306191 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39ccadf8-7389-4ffd-a72d-efd46066c233-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306238 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1a95901-32c9-465a-a8b6-e44c289beb03-metrics-tls\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306300 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-trusted-ca\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306353 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306415 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjcj\" (UniqueName: \"kubernetes.io/projected/fbf0570b-233b-4046-8d07-e164c66cf429-kube-api-access-wqjcj\") pod \"multus-admission-controller-857f4d67dd-5jcbf\" (UID: \"fbf0570b-233b-4046-8d07-e164c66cf429\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306472 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306528 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-serving-cert\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306583 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqf2f\" (UniqueName: \"kubernetes.io/projected/d1a95901-32c9-465a-a8b6-e44c289beb03-kube-api-access-tqf2f\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306639 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31d7df6d-769e-46b7-878c-2966896b8646-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-trusted-ca-bundle\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306787 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-config\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306845 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9dpn\" (UniqueName: \"kubernetes.io/projected/bd629fc6-afbd-4fba-ad8a-af3aa86487b3-kube-api-access-k9dpn\") pod \"package-server-manager-789f6589d5-hxdbd\" (UID: \"bd629fc6-afbd-4fba-ad8a-af3aa86487b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306898 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpf6\" (UniqueName: \"kubernetes.io/projected/bc302df1-2d39-41a5-be37-eaccc5cde214-kube-api-access-6jpf6\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.306959 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31d7df6d-769e-46b7-878c-2966896b8646-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.307017 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-client-ca\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.307070 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00795f3d-b1b4-494b-8898-380798319532-serving-cert\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.307125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l8vc\" (UniqueName: \"kubernetes.io/projected/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-kube-api-access-7l8vc\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.307174 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/07e202a3-1f47-409c-83f2-a066ddb1ffe2-config-volume\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.307465 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-bound-sa-token\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.307591 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-oauth-config\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.307725 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/94e411a4-eb23-4669-ba67-26f4e6cc6605-node-bootstrap-token\") pod \"machine-config-server-t47sj\" (UID: \"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.307891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/003746e3-d80d-40d7-aac1-03bec863a85d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.308030 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-audit-dir\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.308041 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-image-import-ca\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.308248 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq47b\" (UniqueName: \"kubernetes.io/projected/93d84de9-e75f-4127-b3ee-890375498dc3-kube-api-access-dq47b\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.308402 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-metrics-certs\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " 
pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.308522 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-config\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.308634 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-serving-cert\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.308817 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbf0570b-233b-4046-8d07-e164c66cf429-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5jcbf\" (UID: \"fbf0570b-233b-4046-8d07-e164c66cf429\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.308940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1a95901-32c9-465a-a8b6-e44c289beb03-trusted-ca\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.309070 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.309260 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79c41335-5b2d-4567-93ab-48948dfdd24c-serving-cert\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.309340 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.309388 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-policies\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.309544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5r4q\" 
(UniqueName: \"kubernetes.io/projected/34727a2e-1e3a-4371-9052-7df4c6693f44-kube-api-access-d5r4q\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.310509 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.310521 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.312314 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/84fc3c58-35a0-4347-b094-27f5fc4e7aae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.313855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.313954 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.315077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-config\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.315240 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31d7df6d-769e-46b7-878c-2966896b8646-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.315728 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39ccadf8-7389-4ffd-a72d-efd46066c233-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.316193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-config\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.316289 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-audit-policies\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.316994 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.317103 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.317249 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-etcd-client\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.317429 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-config\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.317443 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-plugins-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.317549 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-serving-cert\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.317573 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-etcd-serving-ca\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.317680 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-oauth-serving-cert\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318036 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84fc3c58-35a0-4347-b094-27f5fc4e7aae-serving-cert\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318089 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318173 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003746e3-d80d-40d7-aac1-03bec863a85d-config\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318214 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-config\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318251 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-encryption-config\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318293 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm5bp\" (UniqueName: \"kubernetes.io/projected/07e202a3-1f47-409c-83f2-a066ddb1ffe2-kube-api-access-tm5bp\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318337 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f12779a-be6f-4854-adc8-023f3da9c562-auth-proxy-config\") pod \"machine-approver-56656f9798-h5bxn\" (UID: 
\"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318389 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-console-config\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318423 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-config\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318455 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-audit\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318490 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwqm\" (UniqueName: \"kubernetes.io/projected/165ef6ad-2a73-4126-b694-938ecbe6cd77-kube-api-access-bvwqm\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318527 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlx75\" (UniqueName: \"kubernetes.io/projected/94e411a4-eb23-4669-ba67-26f4e6cc6605-kube-api-access-mlx75\") pod \"machine-config-server-t47sj\" (UID: \"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318265 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7f12779a-be6f-4854-adc8-023f3da9c562-machine-approver-tls\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318785 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-srv-cert\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318819 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318863 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07498167-d644-44f1-943b-fda3bd8de13d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318900 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-config\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.318975 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kkdd\" (UniqueName: \"kubernetes.io/projected/39ccadf8-7389-4ffd-a72d-efd46066c233-kube-api-access-8kkdd\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319052 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-default-certificate\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319124 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-node-pullsecrets\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34727a2e-1e3a-4371-9052-7df4c6693f44-serving-cert\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319596 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/01e31968-eb99-4a34-a9da-25ffd0101936-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319668 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319701 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/01e31968-eb99-4a34-a9da-25ffd0101936-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319740 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ccadf8-7389-4ffd-a72d-efd46066c233-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319773 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/551f21a4-3206-4d98-82a5-82274989d3ae-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-htmtg\" (UID: \"551f21a4-3206-4d98-82a5-82274989d3ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319804 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr2gg\" (UniqueName: \"kubernetes.io/projected/aea3e158-90f4-4df7-9fbd-65cfdf94a813-kube-api-access-kr2gg\") pod \"dns-operator-744455d44c-rr6k4\" (UID: \"aea3e158-90f4-4df7-9fbd-65cfdf94a813\") " pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319837 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1143d96-b3c1-4892-8f6b-3672cab07e9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319871 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/becf7050-f3f8-42a3-bf02-cf9347e493e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319881 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.319902 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ac750b0-4cbd-474b-a99c-1ddafb554107-srv-cert\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.320540 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-client-ca\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.321198 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07498167-d644-44f1-943b-fda3bd8de13d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.321562 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-tls\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.321561 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-etcd-client\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.321929 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-image-import-ca\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.322241 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.322613 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-trusted-ca\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.323155 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.323510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-serving-cert\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.323604 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-config\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324054 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-client-ca\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324150 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-service-ca\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324167 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324196 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-service-ca-bundle\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324234 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs2rd\" (UniqueName: \"kubernetes.io/projected/31dcae62-0572-4873-b054-06b731401d8e-kube-api-access-hs2rd\") pod \"ingress-canary-wx7m6\" (UID: \"31dcae62-0572-4873-b054-06b731401d8e\") " pod="openshift-ingress-canary/ingress-canary-wx7m6" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324347 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324481 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8s7l\" (UniqueName: \"kubernetes.io/projected/00795f3d-b1b4-494b-8898-380798319532-kube-api-access-p8s7l\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-profile-collector-cert\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324571 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgc8\" (UniqueName: \"kubernetes.io/projected/228abb37-ff66-48b3-a882-d67ca901a322-kube-api-access-dsgc8\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324614 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecf50e14-57e0-49d5-8581-6842527b63bc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324625 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-policies\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324767 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f12779a-be6f-4854-adc8-023f3da9c562-config\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324830 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-encryption-config\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324868 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-serving-cert\") pod 
\"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.325019 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-images\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.325047 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-trusted-ca-bundle\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.325061 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0f375d2-c00a-49a3-963d-5d2bb71fa625-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cx4g9\" (UID: \"e0f375d2-c00a-49a3-963d-5d2bb71fa625\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.324932 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.325175 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-config\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.325361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f12779a-be6f-4854-adc8-023f3da9c562-auth-proxy-config\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.325428 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79c41335-5b2d-4567-93ab-48948dfdd24c-config\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 
17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.326388 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-audit\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.326447 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00795f3d-b1b4-494b-8898-380798319532-serving-cert\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.326530 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-client-ca\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.326601 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0436d85a-59ed-46ee-b01c-d82423c932b0-metrics-tls\") pod \"dns-default-4855s\" (UID: \"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.326637 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ac750b0-4cbd-474b-a99c-1ddafb554107-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.326924 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-images\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.327032 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-certificates\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.327229 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/165ef6ad-2a73-4126-b694-938ecbe6cd77-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.327305 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-node-pullsecrets\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " 
pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.327500 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/becf7050-f3f8-42a3-bf02-cf9347e493e6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.328065 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f12779a-be6f-4854-adc8-023f3da9c562-config\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.328567 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-console-config\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.329284 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.329443 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.329457 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-serving-cert\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.329450 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.329740 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.329792 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/01e31968-eb99-4a34-a9da-25ffd0101936-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.329833 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07498167-d644-44f1-943b-fda3bd8de13d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.330594 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-oauth-config\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.331139 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/becf7050-f3f8-42a3-bf02-cf9347e493e6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.331863 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79c41335-5b2d-4567-93ab-48948dfdd24c-serving-cert\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.331984 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:11:59.831954056 +0000 UTC m=+142.174780584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.332051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-encryption-config\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.333396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34727a2e-1e3a-4371-9052-7df4c6693f44-serving-cert\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.334113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/79c41335-5b2d-4567-93ab-48948dfdd24c-etcd-client\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.334161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/01e31968-eb99-4a34-a9da-25ffd0101936-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.334458 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aea3e158-90f4-4df7-9fbd-65cfdf94a813-metrics-tls\") pod \"dns-operator-744455d44c-rr6k4\" (UID: \"aea3e158-90f4-4df7-9fbd-65cfdf94a813\") " pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.334511 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31d7df6d-769e-46b7-878c-2966896b8646-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.335789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/165ef6ad-2a73-4126-b694-938ecbe6cd77-serving-cert\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.337400 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/551f21a4-3206-4d98-82a5-82274989d3ae-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-htmtg\" (UID: \"551f21a4-3206-4d98-82a5-82274989d3ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.341377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39ccadf8-7389-4ffd-a72d-efd46066c233-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.341774 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.342206 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvt68\" (UniqueName: \"kubernetes.io/projected/84fc3c58-35a0-4347-b094-27f5fc4e7aae-kube-api-access-dvt68\") pod \"openshift-config-operator-7777fb866f-hdjtf\" (UID: \"84fc3c58-35a0-4347-b094-27f5fc4e7aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.346094 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-serving-cert\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.369582 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckxfk\" (UniqueName: \"kubernetes.io/projected/a0015215-3f91-43fc-bbee-2560bb8f4c62-kube-api-access-ckxfk\") pod \"downloads-7954f5f757-r96bc\" (UID: \"a0015215-3f91-43fc-bbee-2560bb8f4c62\") " pod="openshift-console/downloads-7954f5f757-r96bc" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.390658 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtrtg\" (UniqueName: \"kubernetes.io/projected/7f12779a-be6f-4854-adc8-023f3da9c562-kube-api-access-jtrtg\") pod \"machine-approver-56656f9798-h5bxn\" (UID: \"7f12779a-be6f-4854-adc8-023f3da9c562\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.401848 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01e31968-eb99-4a34-a9da-25ffd0101936-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.418100 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2dl\" (UniqueName: \"kubernetes.io/projected/8412b381-cbf1-4f9c-8e93-6991812b725d-kube-api-access-wt2dl\") pod 
\"oauth-openshift-558db77b4-jz8bh\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427232 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgc8\" (UniqueName: \"kubernetes.io/projected/228abb37-ff66-48b3-a882-d67ca901a322-kube-api-access-dsgc8\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427351 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecf50e14-57e0-49d5-8581-6842527b63bc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.427390 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:11:59.927371126 +0000 UTC m=+142.270197454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427478 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0f375d2-c00a-49a3-963d-5d2bb71fa625-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cx4g9\" (UID: \"e0f375d2-c00a-49a3-963d-5d2bb71fa625\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0436d85a-59ed-46ee-b01c-d82423c932b0-metrics-tls\") pod \"dns-default-4855s\" (UID: \"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427591 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ac750b0-4cbd-474b-a99c-1ddafb554107-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427743 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7418e8a9-d007-44c8-9969-0097ab135a74-signing-key\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd629fc6-afbd-4fba-ad8a-af3aa86487b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hxdbd\" (UID: \"bd629fc6-afbd-4fba-ad8a-af3aa86487b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427857 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427910 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecf50e14-57e0-49d5-8581-6842527b63bc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427888 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7418e8a9-d007-44c8-9969-0097ab135a74-signing-cabundle\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.427962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db76n\" (UniqueName: \"kubernetes.io/projected/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-kube-api-access-db76n\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428034 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7vmq\" (UniqueName: \"kubernetes.io/projected/0436d85a-59ed-46ee-b01c-d82423c932b0-kube-api-access-g7vmq\") pod \"dns-default-4855s\" (UID: \"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428071 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9gw\" (UniqueName: \"kubernetes.io/projected/cd1802c8-be73-40c4-b495-54eede995a32-kube-api-access-5q9gw\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428106 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1a95901-32c9-465a-a8b6-e44c289beb03-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428164 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-mountpoint-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428250 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-serving-cert\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428285 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1143d96-b3c1-4892-8f6b-3672cab07e9c-config\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428347 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0436d85a-59ed-46ee-b01c-d82423c932b0-config-volume\") pod \"dns-default-4855s\" (UID: 
\"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428373 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc302df1-2d39-41a5-be37-eaccc5cde214-webhook-cert\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428422 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31dcae62-0572-4873-b054-06b731401d8e-cert\") pod \"ingress-canary-wx7m6\" (UID: \"31dcae62-0572-4873-b054-06b731401d8e\") " pod="openshift-ingress-canary/ingress-canary-wx7m6" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428454 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spfnq\" (UniqueName: \"kubernetes.io/projected/7418e8a9-d007-44c8-9969-0097ab135a74-kube-api-access-spfnq\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428505 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdww2\" (UniqueName: \"kubernetes.io/projected/8ac750b0-4cbd-474b-a99c-1ddafb554107-kube-api-access-bdww2\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428549 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/101bdb1c-75a1-4d92-90e9-360cece56c1e-service-ca-bundle\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428602 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-socket-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428624 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bc302df1-2d39-41a5-be37-eaccc5cde214-tmpfs\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/94e411a4-eb23-4669-ba67-26f4e6cc6605-certs\") pod \"machine-config-server-t47sj\" (UID: \"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5542z\" (UniqueName: \"kubernetes.io/projected/606bdb55-1c87-4057-bd78-91a9769dcd1c-kube-api-access-5542z\") 
pod \"migrator-59844c95c7-8hf2h\" (UID: \"606bdb55-1c87-4057-bd78-91a9769dcd1c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428826 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428860 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc302df1-2d39-41a5-be37-eaccc5cde214-apiservice-cert\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428893 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd1802c8-be73-40c4-b495-54eede995a32-proxy-tls\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428925 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-447sk\" (UniqueName: \"kubernetes.io/projected/98b6db89-c5c7-4ec3-90dd-013390b75f20-kube-api-access-447sk\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428955 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j96c\" (UniqueName: \"kubernetes.io/projected/ecf50e14-57e0-49d5-8581-6842527b63bc-kube-api-access-5j96c\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.428997 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003746e3-d80d-40d7-aac1-03bec863a85d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh89w\" (UniqueName: \"kubernetes.io/projected/101bdb1c-75a1-4d92-90e9-360cece56c1e-kube-api-access-kh89w\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429055 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ecf50e14-57e0-49d5-8581-6842527b63bc-images\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429099 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-registration-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429134 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07e202a3-1f47-409c-83f2-a066ddb1ffe2-secret-volume\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429179 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-csi-data-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnfl2\" (UniqueName: \"kubernetes.io/projected/e0f375d2-c00a-49a3-963d-5d2bb71fa625-kube-api-access-jnfl2\") pod \"control-plane-machine-set-operator-78cbb6b69f-cx4g9\" (UID: \"e0f375d2-c00a-49a3-963d-5d2bb71fa625\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-stats-auth\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429300 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd1802c8-be73-40c4-b495-54eede995a32-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf50e14-57e0-49d5-8581-6842527b63bc-proxy-tls\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429365 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1143d96-b3c1-4892-8f6b-3672cab07e9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429398 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1a95901-32c9-465a-a8b6-e44c289beb03-metrics-tls\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429434 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqjcj\" (UniqueName: \"kubernetes.io/projected/fbf0570b-233b-4046-8d07-e164c66cf429-kube-api-access-wqjcj\") pod \"multus-admission-controller-857f4d67dd-5jcbf\" (UID: \"fbf0570b-233b-4046-8d07-e164c66cf429\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqf2f\" (UniqueName: \"kubernetes.io/projected/d1a95901-32c9-465a-a8b6-e44c289beb03-kube-api-access-tqf2f\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429500 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9dpn\" (UniqueName: \"kubernetes.io/projected/bd629fc6-afbd-4fba-ad8a-af3aa86487b3-kube-api-access-k9dpn\") pod \"package-server-manager-789f6589d5-hxdbd\" (UID: \"bd629fc6-afbd-4fba-ad8a-af3aa86487b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429532 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jpf6\" (UniqueName: \"kubernetes.io/projected/bc302df1-2d39-41a5-be37-eaccc5cde214-kube-api-access-6jpf6\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429569 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l8vc\" (UniqueName: \"kubernetes.io/projected/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-kube-api-access-7l8vc\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429600 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07e202a3-1f47-409c-83f2-a066ddb1ffe2-config-volume\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429679 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/94e411a4-eb23-4669-ba67-26f4e6cc6605-node-bootstrap-token\") pod \"machine-config-server-t47sj\" (UID: \"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429714 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/003746e3-d80d-40d7-aac1-03bec863a85d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429762 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-metrics-certs\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429797 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbf0570b-233b-4046-8d07-e164c66cf429-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5jcbf\" (UID: \"fbf0570b-233b-4046-8d07-e164c66cf429\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429828 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1a95901-32c9-465a-a8b6-e44c289beb03-trusted-ca\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429873 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-plugins-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429908 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003746e3-d80d-40d7-aac1-03bec863a85d-config\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-config\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.429970 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm5bp\" (UniqueName: \"kubernetes.io/projected/07e202a3-1f47-409c-83f2-a066ddb1ffe2-kube-api-access-tm5bp\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430014 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlx75\" (UniqueName: \"kubernetes.io/projected/94e411a4-eb23-4669-ba67-26f4e6cc6605-kube-api-access-mlx75\") pod \"machine-config-server-t47sj\" (UID: \"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " 
pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430047 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-srv-cert\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430100 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-default-certificate\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430143 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1143d96-b3c1-4892-8f6b-3672cab07e9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ac750b0-4cbd-474b-a99c-1ddafb554107-srv-cert\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs2rd\" (UniqueName: \"kubernetes.io/projected/31dcae62-0572-4873-b054-06b731401d8e-kube-api-access-hs2rd\") pod \"ingress-canary-wx7m6\" (UID: \"31dcae62-0572-4873-b054-06b731401d8e\") " pod="openshift-ingress-canary/ingress-canary-wx7m6" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-profile-collector-cert\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430500 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0f375d2-c00a-49a3-963d-5d2bb71fa625-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-cx4g9\" (UID: \"e0f375d2-c00a-49a3-963d-5d2bb71fa625\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.430609 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-mountpoint-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.431800 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7418e8a9-d007-44c8-9969-0097ab135a74-signing-cabundle\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.432478 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc302df1-2d39-41a5-be37-eaccc5cde214-webhook-cert\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.432759 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd629fc6-afbd-4fba-ad8a-af3aa86487b3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hxdbd\" (UID: \"bd629fc6-afbd-4fba-ad8a-af3aa86487b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.432796 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ac750b0-4cbd-474b-a99c-1ddafb554107-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.433743 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1143d96-b3c1-4892-8f6b-3672cab07e9c-config\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.434072 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-profile-collector-cert\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.434128 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0436d85a-59ed-46ee-b01c-d82423c932b0-metrics-tls\") pod \"dns-default-4855s\" (UID: \"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.434397 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-registration-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.434699 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-socket-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc 
kubenswrapper[4792]: I1127 17:11:59.435221 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/bc302df1-2d39-41a5-be37-eaccc5cde214-tmpfs\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.435409 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0436d85a-59ed-46ee-b01c-d82423c932b0-config-volume\") pod \"dns-default-4855s\" (UID: \"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.435968 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-csi-data-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.436143 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf50e14-57e0-49d5-8581-6842527b63bc-proxy-tls\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.436825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d1a95901-32c9-465a-a8b6-e44c289beb03-metrics-tls\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.436905 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/98b6db89-c5c7-4ec3-90dd-013390b75f20-plugins-dir\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.436987 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7418e8a9-d007-44c8-9969-0097ab135a74-signing-key\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.437288 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d1a95901-32c9-465a-a8b6-e44c289beb03-trusted-ca\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.437691 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd1802c8-be73-40c4-b495-54eede995a32-proxy-tls\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 
17:11:59.437695 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003746e3-d80d-40d7-aac1-03bec863a85d-config\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.438438 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-config\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.438884 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ecf50e14-57e0-49d5-8581-6842527b63bc-images\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.439036 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07e202a3-1f47-409c-83f2-a066ddb1ffe2-secret-volume\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.439064 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-stats-auth\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.439246 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07e202a3-1f47-409c-83f2-a066ddb1ffe2-config-volume\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.439263 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpksg\" (UniqueName: \"kubernetes.io/projected/1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f-kube-api-access-fpksg\") pod \"console-operator-58897d9998-g7kpw\" (UID: \"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f\") " pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.439638 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.439895 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/94e411a4-eb23-4669-ba67-26f4e6cc6605-certs\") pod \"machine-config-server-t47sj\" (UID: 
\"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.439948 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31dcae62-0572-4873-b054-06b731401d8e-cert\") pod \"ingress-canary-wx7m6\" (UID: \"31dcae62-0572-4873-b054-06b731401d8e\") " pod="openshift-ingress-canary/ingress-canary-wx7m6" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.434708 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-serving-cert\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.440433 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.440712 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/101bdb1c-75a1-4d92-90e9-360cece56c1e-service-ca-bundle\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.440853 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd1802c8-be73-40c4-b495-54eede995a32-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.442567 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/94e411a4-eb23-4669-ba67-26f4e6cc6605-node-bootstrap-token\") pod \"machine-config-server-t47sj\" (UID: \"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.442675 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/003746e3-d80d-40d7-aac1-03bec863a85d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.442823 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-default-certificate\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.443026 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fbf0570b-233b-4046-8d07-e164c66cf429-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5jcbf\" (UID: \"fbf0570b-233b-4046-8d07-e164c66cf429\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.443319 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ac750b0-4cbd-474b-a99c-1ddafb554107-srv-cert\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.443355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc302df1-2d39-41a5-be37-eaccc5cde214-apiservice-cert\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.443548 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-srv-cert\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.444086 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/101bdb1c-75a1-4d92-90e9-360cece56c1e-metrics-certs\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.445265 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1143d96-b3c1-4892-8f6b-3672cab07e9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.455151 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sggcf\" (UniqueName: \"kubernetes.io/projected/0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1-kube-api-access-sggcf\") pod \"apiserver-76f77b778f-smxkj\" (UID: \"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1\") " pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.477267 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4t29\" (UniqueName: \"kubernetes.io/projected/31d7df6d-769e-46b7-878c-2966896b8646-kube-api-access-h4t29\") pod \"openshift-apiserver-operator-796bbdcf4f-x447d\" (UID: \"31d7df6d-769e-46b7-878c-2966896b8646\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.496131 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls52n\" (UniqueName: \"kubernetes.io/projected/aee2ba25-535c-4c4a-8ef4-f4a56f3b3484-kube-api-access-ls52n\") pod \"openshift-controller-manager-operator-756b6f6bc6-j97g2\" (UID: \"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.515634 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.522492 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm64t\" (UniqueName: \"kubernetes.io/projected/79c41335-5b2d-4567-93ab-48948dfdd24c-kube-api-access-pm64t\") pod \"etcd-operator-b45778765-qgj4h\" (UID: \"79c41335-5b2d-4567-93ab-48948dfdd24c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.531315 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.531711 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.03169364 +0000 UTC m=+142.374519958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.541299 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj2lc\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-kube-api-access-bj2lc\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.562773 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvnsx\" (UniqueName: \"kubernetes.io/projected/1d1f4c3b-1bc7-49c0-9aba-b073020ed51f-kube-api-access-jvnsx\") pod \"authentication-operator-69f744f599-z84xw\" (UID: \"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.566252 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.583481 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xfqw\" (UniqueName: \"kubernetes.io/projected/551f21a4-3206-4d98-82a5-82274989d3ae-kube-api-access-6xfqw\") pod \"cluster-samples-operator-665b6dd947-htmtg\" (UID: \"551f21a4-3206-4d98-82a5-82274989d3ae\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" Nov 27 17:11:59 crc kubenswrapper[4792]: W1127 17:11:59.585381 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f12779a_be6f_4854_adc8_023f3da9c562.slice/crio-282372b5c28c0d7c059cc92a830eba0337b74ca339a45e5b7ffdf6319f225b0a WatchSource:0}: Error finding container 282372b5c28c0d7c059cc92a830eba0337b74ca339a45e5b7ffdf6319f225b0a: Status 404 returned error can't find the container with id 282372b5c28c0d7c059cc92a830eba0337b74ca339a45e5b7ffdf6319f225b0a Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.597083 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.597953 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq47b\" (UniqueName: \"kubernetes.io/projected/93d84de9-e75f-4127-b3ee-890375498dc3-kube-api-access-dq47b\") pod \"console-f9d7485db-k86pd\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.606882 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.613228 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.622140 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.622946 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-bound-sa-token\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.636221 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.637158 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.137140677 +0000 UTC m=+142.479966995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.638539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m5tv\" (UniqueName: \"kubernetes.io/projected/7bbb7ab5-a68c-402c-99d8-9cd47c361ccd-kube-api-access-4m5tv\") pod \"machine-api-operator-5694c8668f-6lwpq\" (UID: \"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.643578 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.653208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.658635 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-r96bc" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.680487 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5r4q\" (UniqueName: \"kubernetes.io/projected/34727a2e-1e3a-4371-9052-7df4c6693f44-kube-api-access-d5r4q\") pod \"route-controller-manager-6576b87f9c-klkjr\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.698268 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z57fn\" (UniqueName: \"kubernetes.io/projected/01e31968-eb99-4a34-a9da-25ffd0101936-kube-api-access-z57fn\") pod \"cluster-image-registry-operator-dc59b4c8b-8g9jq\" (UID: \"01e31968-eb99-4a34-a9da-25ffd0101936\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.698657 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.719753 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kkdd\" (UniqueName: \"kubernetes.io/projected/39ccadf8-7389-4ffd-a72d-efd46066c233-kube-api-access-8kkdd\") pod \"kube-storage-version-migrator-operator-b67b599dd-k4hn5\" (UID: \"39ccadf8-7389-4ffd-a72d-efd46066c233\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.741736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.742184 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.242163962 +0000 UTC m=+142.584990360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.745238 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07498167-d644-44f1-943b-fda3bd8de13d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8ktks\" (UID: \"07498167-d644-44f1-943b-fda3bd8de13d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.753128 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.761164 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8s7l\" (UniqueName: \"kubernetes.io/projected/00795f3d-b1b4-494b-8898-380798319532-kube-api-access-p8s7l\") pod \"controller-manager-879f6c89f-8gb65\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.766203 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.781219 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwqm\" (UniqueName: \"kubernetes.io/projected/165ef6ad-2a73-4126-b694-938ecbe6cd77-kube-api-access-bvwqm\") pod \"apiserver-7bbb656c7d-bql6w\" (UID: \"165ef6ad-2a73-4126-b694-938ecbe6cd77\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.791905 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.807256 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr2gg\" (UniqueName: \"kubernetes.io/projected/aea3e158-90f4-4df7-9fbd-65cfdf94a813-kube-api-access-kr2gg\") pod \"dns-operator-744455d44c-rr6k4\" (UID: \"aea3e158-90f4-4df7-9fbd-65cfdf94a813\") " pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.818856 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-g7kpw"] Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.820915 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgc8\" (UniqueName: \"kubernetes.io/projected/228abb37-ff66-48b3-a882-d67ca901a322-kube-api-access-dsgc8\") pod \"marketplace-operator-79b997595-7psgx\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.843502 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.844069 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.344049694 +0000 UTC m=+142.686876022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.848675 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7vmq\" (UniqueName: \"kubernetes.io/projected/0436d85a-59ed-46ee-b01c-d82423c932b0-kube-api-access-g7vmq\") pod \"dns-default-4855s\" (UID: \"0436d85a-59ed-46ee-b01c-d82423c932b0\") " pod="openshift-dns/dns-default-4855s" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.851491 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.858903 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9gw\" (UniqueName: \"kubernetes.io/projected/cd1802c8-be73-40c4-b495-54eede995a32-kube-api-access-5q9gw\") pod \"machine-config-controller-84d6567774-t8xxz\" (UID: \"cd1802c8-be73-40c4-b495-54eede995a32\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.877590 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d1a95901-32c9-465a-a8b6-e44c289beb03-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.880020 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.887851 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.897283 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db76n\" (UniqueName: \"kubernetes.io/projected/5d775ccc-6b5a-4d05-8138-4521d5fb8adc-kube-api-access-db76n\") pod \"service-ca-operator-777779d784-tswzz\" (UID: \"5d775ccc-6b5a-4d05-8138-4521d5fb8adc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.925382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1143d96-b3c1-4892-8f6b-3672cab07e9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ltcfx\" (UID: \"b1143d96-b3c1-4892-8f6b-3672cab07e9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.936814 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.945286 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:11:59 crc kubenswrapper[4792]: E1127 17:11:59.945563 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.445552034 +0000 UTC m=+142.788378352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.959152 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-447sk\" (UniqueName: \"kubernetes.io/projected/98b6db89-c5c7-4ec3-90dd-013390b75f20-kube-api-access-447sk\") pod \"csi-hostpathplugin-fz8kn\" (UID: \"98b6db89-c5c7-4ec3-90dd-013390b75f20\") " pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.965618 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j96c\" (UniqueName: \"kubernetes.io/projected/ecf50e14-57e0-49d5-8581-6842527b63bc-kube-api-access-5j96c\") pod \"machine-config-operator-74547568cd-nwp5c\" (UID: \"ecf50e14-57e0-49d5-8581-6842527b63bc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.966399 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.979033 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh89w\" (UniqueName: \"kubernetes.io/projected/101bdb1c-75a1-4d92-90e9-360cece56c1e-kube-api-access-kh89w\") pod \"router-default-5444994796-qgvjw\" (UID: \"101bdb1c-75a1-4d92-90e9-360cece56c1e\") " pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.981473 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" Nov 27 17:11:59 crc kubenswrapper[4792]: I1127 17:11:59.990503 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.002541 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnfl2\" (UniqueName: \"kubernetes.io/projected/e0f375d2-c00a-49a3-963d-5d2bb71fa625-kube-api-access-jnfl2\") pod \"control-plane-machine-set-operator-78cbb6b69f-cx4g9\" (UID: \"e0f375d2-c00a-49a3-963d-5d2bb71fa625\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.015536 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.020473 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5542z\" (UniqueName: \"kubernetes.io/projected/606bdb55-1c87-4057-bd78-91a9769dcd1c-kube-api-access-5542z\") pod \"migrator-59844c95c7-8hf2h\" (UID: \"606bdb55-1c87-4057-bd78-91a9769dcd1c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.024739 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.034454 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.040800 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.042799 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqjcj\" (UniqueName: \"kubernetes.io/projected/fbf0570b-233b-4046-8d07-e164c66cf429-kube-api-access-wqjcj\") pod \"multus-admission-controller-857f4d67dd-5jcbf\" (UID: \"fbf0570b-233b-4046-8d07-e164c66cf429\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.046234 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.046761 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.546746396 +0000 UTC m=+142.889572714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.058099 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.061513 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqf2f\" (UniqueName: \"kubernetes.io/projected/d1a95901-32c9-465a-a8b6-e44c289beb03-kube-api-access-tqf2f\") pod \"ingress-operator-5b745b69d9-fw9ts\" (UID: \"d1a95901-32c9-465a-a8b6-e44c289beb03\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.071023 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.073194 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.080494 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.082156 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9dpn\" (UniqueName: \"kubernetes.io/projected/bd629fc6-afbd-4fba-ad8a-af3aa86487b3-kube-api-access-k9dpn\") pod \"package-server-manager-789f6589d5-hxdbd\" (UID: \"bd629fc6-afbd-4fba-ad8a-af3aa86487b3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.087948 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k86pd"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.101330 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qgj4h"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.105746 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.116739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jpf6\" (UniqueName: \"kubernetes.io/projected/bc302df1-2d39-41a5-be37-eaccc5cde214-kube-api-access-6jpf6\") pod \"packageserver-d55dfcdfc-w7lbb\" (UID: \"bc302df1-2d39-41a5-be37-eaccc5cde214\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.118025 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l8vc\" (UniqueName: \"kubernetes.io/projected/0a712c8a-b3db-40bd-9e6f-cd23b095e2a4-kube-api-access-7l8vc\") pod \"catalog-operator-68c6474976-8dhs9\" (UID: \"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:12:00 crc kubenswrapper[4792]: W1127 17:12:00.120476 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod101bdb1c_75a1_4d92_90e9_360cece56c1e.slice/crio-88d7321cea93c8ece90bb50a0a48cdae43fa06ed21c3542c0765c967df96e28d WatchSource:0}: Error finding container 88d7321cea93c8ece90bb50a0a48cdae43fa06ed21c3542c0765c967df96e28d: Status 404 returned error can't find the container with id 88d7321cea93c8ece90bb50a0a48cdae43fa06ed21c3542c0765c967df96e28d Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.121175 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.128414 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.135368 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4855s" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.136849 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm5bp\" (UniqueName: \"kubernetes.io/projected/07e202a3-1f47-409c-83f2-a066ddb1ffe2-kube-api-access-tm5bp\") pod \"collect-profiles-29404380-47pdq\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:12:00 crc kubenswrapper[4792]: W1127 17:12:00.141014 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93d84de9_e75f_4127_b3ee_890375498dc3.slice/crio-c73e6b99baefd2c9ed97ff23b12c864b3580d34ac9c781cfb4bc42376b062b80 WatchSource:0}: Error finding container c73e6b99baefd2c9ed97ff23b12c864b3580d34ac9c781cfb4bc42376b062b80: Status 404 returned error can't find the container with id c73e6b99baefd2c9ed97ff23b12c864b3580d34ac9c781cfb4bc42376b062b80 Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.142074 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.147520 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.147978 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.647963308 +0000 UTC m=+142.990789626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.159840 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlx75\" (UniqueName: \"kubernetes.io/projected/94e411a4-eb23-4669-ba67-26f4e6cc6605-kube-api-access-mlx75\") pod \"machine-config-server-t47sj\" (UID: \"94e411a4-eb23-4669-ba67-26f4e6cc6605\") " pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.164351 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.170022 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t47sj" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.190233 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/003746e3-d80d-40d7-aac1-03bec863a85d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xk5xs\" (UID: \"003746e3-d80d-40d7-aac1-03bec863a85d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.202085 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdww2\" (UniqueName: \"kubernetes.io/projected/8ac750b0-4cbd-474b-a99c-1ddafb554107-kube-api-access-bdww2\") pod \"olm-operator-6b444d44fb-bt9c5\" (UID: \"8ac750b0-4cbd-474b-a99c-1ddafb554107\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.209651 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jz8bh"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.218805 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.220050 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.235158 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs2rd\" (UniqueName: \"kubernetes.io/projected/31dcae62-0572-4873-b054-06b731401d8e-kube-api-access-hs2rd\") pod \"ingress-canary-wx7m6\" (UID: \"31dcae62-0572-4873-b054-06b731401d8e\") " pod="openshift-ingress-canary/ingress-canary-wx7m6" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.235275 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spfnq\" (UniqueName: \"kubernetes.io/projected/7418e8a9-d007-44c8-9969-0097ab135a74-kube-api-access-spfnq\") pod \"service-ca-9c57cc56f-7b9r2\" (UID: \"7418e8a9-d007-44c8-9969-0097ab135a74\") " pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.248316 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.248464 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.748444358 +0000 UTC m=+143.091270676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.248508 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.249489 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.749479079 +0000 UTC m=+143.092305397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.308153 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.339709 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.349830 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.352507 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.352979 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.852963918 +0000 UTC m=+143.195790236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.353320 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.370111 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-r96bc"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.372889 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t47sj" event={"ID":"94e411a4-eb23-4669-ba67-26f4e6cc6605","Type":"ContainerStarted","Data":"6b9e425e965e74e25e356e552e7c99464068407a7ec530c860496811dedbc6a4"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.374970 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" event={"ID":"31d7df6d-769e-46b7-878c-2966896b8646","Type":"ContainerStarted","Data":"37593809efa7e8aac0968d74243c71ba8d9c38ca8a5c2ae76ff7e8e14dfb9a0a"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.376674 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" event={"ID":"79c41335-5b2d-4567-93ab-48948dfdd24c","Type":"ContainerStarted","Data":"d83ce21ccfcc8195192502a9baf8d5b23047ac724ec5e9395ed4807abb354ac7"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.377840 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g7kpw" event={"ID":"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f","Type":"ContainerStarted","Data":"502b91de4732c759140aae9262e337c8fb5126ed85f81306e57f023e1613f72a"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.377870 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-g7kpw" event={"ID":"1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f","Type":"ContainerStarted","Data":"7043fe6d443cc891b056b8c47edf2943d351b59012a654e70e82872b83105121"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.378274 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-g7kpw" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.380187 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" event={"ID":"7f12779a-be6f-4854-adc8-023f3da9c562","Type":"ContainerStarted","Data":"219e2ba0a29a03e9338728b8835047745bf1d2b0ffdeee97524750a2e8c727a2"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.380229 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" event={"ID":"7f12779a-be6f-4854-adc8-023f3da9c562","Type":"ContainerStarted","Data":"282372b5c28c0d7c059cc92a830eba0337b74ca339a45e5b7ffdf6319f225b0a"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.381921 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-k86pd" event={"ID":"93d84de9-e75f-4127-b3ee-890375498dc3","Type":"ContainerStarted","Data":"c73e6b99baefd2c9ed97ff23b12c864b3580d34ac9c781cfb4bc42376b062b80"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.383411 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" event={"ID":"84fc3c58-35a0-4347-b094-27f5fc4e7aae","Type":"ContainerStarted","Data":"f577ee22e57528ee010f71e51228ce86f78b898a4c003a8c31b8d4e7928c17f5"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.384236 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qgvjw" event={"ID":"101bdb1c-75a1-4d92-90e9-360cece56c1e","Type":"ContainerStarted","Data":"88d7321cea93c8ece90bb50a0a48cdae43fa06ed21c3542c0765c967df96e28d"} Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.387635 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.397699 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.406030 4792 patch_prober.go:28] interesting pod/console-operator-58897d9998-g7kpw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.406083 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-g7kpw" podUID="1338a809-c8ec-4bfe-bbeb-ad2c4fc66e0f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.413061 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.456617 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.457431 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:00.957416696 +0000 UTC m=+143.300243004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.474365 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wx7m6" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.557944 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.558140 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.058084491 +0000 UTC m=+143.400910809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.558439 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.558767 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.058756661 +0000 UTC m=+143.401582969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.661126 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.661476 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.161453397 +0000 UTC m=+143.504279715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.714162 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.714196 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-smxkj"] Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.733557 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-g7kpw" podStartSLOduration=121.733540715 podStartE2EDuration="2m1.733540715s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:00.73234202 +0000 UTC m=+143.075168338" watchObservedRunningTime="2025-11-27 17:12:00.733540715 +0000 UTC m=+143.076367033" Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.765340 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.765748 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.26573497 +0000 UTC m=+143.608561288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: W1127 17:12:00.787530 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34727a2e_1e3a_4371_9052_7df4c6693f44.slice/crio-6afbf23f932f8a0366155e396aed428496389639bf6ffef251369fa7c2a4aa6a WatchSource:0}: Error finding container 6afbf23f932f8a0366155e396aed428496389639bf6ffef251369fa7c2a4aa6a: Status 404 returned error can't find the container with id 6afbf23f932f8a0366155e396aed428496389639bf6ffef251369fa7c2a4aa6a Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.869692 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.870214 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.370199359 +0000 UTC m=+143.713025677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:00 crc kubenswrapper[4792]: I1127 17:12:00.971370 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:00 crc kubenswrapper[4792]: E1127 17:12:00.971705 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.471693228 +0000 UTC m=+143.814519536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.073237 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.073536 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.573519828 +0000 UTC m=+143.916346146 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.114755 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c"] Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.143321 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-z84xw"] Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.148742 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w"] Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.155375 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6lwpq"] Nov 27 17:12:01 crc kubenswrapper[4792]: W1127 17:12:01.171175 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod165ef6ad_2a73_4126_b694_938ecbe6cd77.slice/crio-773b1750916f8617ea21d9b53b6bfdfbd2dd9b760e746cc7be4e9cbca0c20a85 WatchSource:0}: Error finding container 773b1750916f8617ea21d9b53b6bfdfbd2dd9b760e746cc7be4e9cbca0c20a85: Status 404 returned error can't find the container with id 773b1750916f8617ea21d9b53b6bfdfbd2dd9b760e746cc7be4e9cbca0c20a85 Nov 27 17:12:01 crc kubenswrapper[4792]: W1127 17:12:01.171798 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bbb7ab5_a68c_402c_99d8_9cd47c361ccd.slice/crio-a2ed8bd345361df0b36f81adb2a9deb8f38ee60c2d796d2e6011165a933a7efd WatchSource:0}: Error finding container a2ed8bd345361df0b36f81adb2a9deb8f38ee60c2d796d2e6011165a933a7efd: Status 404 returned error can't find the container with id a2ed8bd345361df0b36f81adb2a9deb8f38ee60c2d796d2e6011165a933a7efd Nov 27 17:12:01 
crc kubenswrapper[4792]: I1127 17:12:01.173126 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8gb65"] Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.174841 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.175245 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.675231004 +0000 UTC m=+144.018057322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:01 crc kubenswrapper[4792]: W1127 17:12:01.189018 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d1f4c3b_1bc7_49c0_9aba_b073020ed51f.slice/crio-a492bdc1d619ce025d170d450f2aa2a58dd71dee683273803bbab6c66e4e0cbc WatchSource:0}: Error finding container a492bdc1d619ce025d170d450f2aa2a58dd71dee683273803bbab6c66e4e0cbc: Status 404 returned error can't find the container with id a492bdc1d619ce025d170d450f2aa2a58dd71dee683273803bbab6c66e4e0cbc Nov 27 17:12:01 crc kubenswrapper[4792]: W1127 17:12:01.195384 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00795f3d_b1b4_494b_8898_380798319532.slice/crio-744d899056391a7de511e27cbc7a86f2993305fba44b023243bb04ff3b293f0d WatchSource:0}: Error finding container 744d899056391a7de511e27cbc7a86f2993305fba44b023243bb04ff3b293f0d: Status 404 returned error can't find the container with id 744d899056391a7de511e27cbc7a86f2993305fba44b023243bb04ff3b293f0d Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.276526 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.277272 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.77725646 +0000 UTC m=+144.120082778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.378107 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.378391 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.87837965 +0000 UTC m=+144.221205968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.431209 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq"] Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.447788 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h"] Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.449930 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5"] Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.454448 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" event={"ID":"84fc3c58-35a0-4347-b094-27f5fc4e7aae","Type":"ContainerStarted","Data":"9a3fb9a5f89e508148dc72db0b55dde8b70a17b2c4828a1174d3fc209c73a4d5"} Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.457229 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" event={"ID":"79c41335-5b2d-4567-93ab-48948dfdd24c","Type":"ContainerStarted","Data":"3a4067307a7925609a73889ac51367b7063e1916017e7949fbb2f4d2c7ecb236"} Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.461560 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" event={"ID":"00795f3d-b1b4-494b-8898-380798319532","Type":"ContainerStarted","Data":"744d899056391a7de511e27cbc7a86f2993305fba44b023243bb04ff3b293f0d"} Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.462007 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz"] 
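The dominant pattern in this window is the pair of CSI failures above repeating on a 500ms cycle: MountVolume.MountDevice for image-registry-697d97f7c8-ft9s4 and UnmountVolume.TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b both fail with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", and nestedpendingoperations requeues each operation with durationBeforeRetry 500ms. The driver's node plugin (csi-hostpathplugin-fz8kn, whose sandbox start is logged in this same window) has not yet registered with the kubelet, so every attempt to construct a CSI client fails before any gRPC call is made. Below is a minimal sketch of that registry-lookup-plus-requeue pattern, written with hypothetical types and a fixed 500ms delay rather than kubelet's actual plugin manager:

package main

import (
	"fmt"
	"sync"
	"time"
)

// driverRegistry stands in for the kubelet-side list of registered CSI
// drivers, which is populated when a node plugin announces itself over the
// plugin-registration socket.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

// newClient fails exactly the way the log does until the driver registers.
func (r *driverRegistry) newClient(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]struct{}{}}

	// The node plugin comes up a moment later, as csi-hostpathplugin-fz8kn
	// does in this log.
	go func() {
		time.Sleep(1200 * time.Millisecond)
		reg.register("kubevirt.io.hostpath-provisioner")
	}()

	// Requeue loop with a fixed 500ms delay, matching the
	// "durationBeforeRetry 500ms" entries above.
	const delay = 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := reg.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
			fmt.Printf("attempt %d: %v (retry in %v)\n", attempt, err, delay)
			time.Sleep(delay)
			continue
		}
		fmt.Printf("attempt %d: driver registered, mount can proceed\n", attempt)
		return
	}
}

The point of the sketch is that the failure is purely a lookup miss: nothing is wrong with the volume itself, and once registration lands the same queued operation succeeds on its next retry, which is consistent with the steady throttled retries (rather than escalation) seen above.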
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.464578 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" event={"ID":"7f12779a-be6f-4854-adc8-023f3da9c562","Type":"ContainerStarted","Data":"73879784a46fdec1e1b851f3f8a296b41b52ae44f02e15a3b2430d1295b95499"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.466052 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qgvjw" event={"ID":"101bdb1c-75a1-4d92-90e9-360cece56c1e","Type":"ContainerStarted","Data":"3f9f124d7829121d2eeaad703df33518f170763ceb2858565e45eaf2328c4279"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.466213 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.469339 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" event={"ID":"34727a2e-1e3a-4371-9052-7df4c6693f44","Type":"ContainerStarted","Data":"bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.469358 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" event={"ID":"34727a2e-1e3a-4371-9052-7df4c6693f44","Type":"ContainerStarted","Data":"6afbf23f932f8a0366155e396aed428496389639bf6ffef251369fa7c2a4aa6a"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.469969 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.470944 4792 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-klkjr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.470980 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" podUID="34727a2e-1e3a-4371-9052-7df4c6693f44" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Nov 27 17:12:01 crc kubenswrapper[4792]: W1127 17:12:01.474063 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01e31968_eb99_4a34_a9da_25ffd0101936.slice/crio-93c4f0a952cd83235f19d0696f01f83b904f6401823b6050a53a97b054a68d0c WatchSource:0}: Error finding container 93c4f0a952cd83235f19d0696f01f83b904f6401823b6050a53a97b054a68d0c: Status 404 returned error can't find the container with id 93c4f0a952cd83235f19d0696f01f83b904f6401823b6050a53a97b054a68d0c
Nov 27 17:12:01 crc kubenswrapper[4792]: W1127 17:12:01.476088 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39ccadf8_7389_4ffd_a72d_efd46066c233.slice/crio-d4a3be0672aa2f42cec834153104b4cfdf871d41472b27946deebca95b345650 WatchSource:0}: Error finding container d4a3be0672aa2f42cec834153104b4cfdf871d41472b27946deebca95b345650: Status 404 returned error can't find the container with id d4a3be0672aa2f42cec834153104b4cfdf871d41472b27946deebca95b345650
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.476735 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k86pd" event={"ID":"93d84de9-e75f-4127-b3ee-890375498dc3","Type":"ContainerStarted","Data":"84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.485083 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" event={"ID":"31d7df6d-769e-46b7-878c-2966896b8646","Type":"ContainerStarted","Data":"68db214e5568f861114437d8535759f14bb3077606bc0b07b0d8c5f3edb3a598"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.485708 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.486741 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:01.986721463 +0000 UTC m=+144.329547831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.491947 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" event={"ID":"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f","Type":"ContainerStarted","Data":"a492bdc1d619ce025d170d450f2aa2a58dd71dee683273803bbab6c66e4e0cbc"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.493579 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rr6k4"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.511685 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.519861 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" event={"ID":"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484","Type":"ContainerStarted","Data":"babdfb8368c113cb5a00dacbb54d7db1354dba8f57fb0e15fb9d5fb64a66c835"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.519912 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" event={"ID":"aee2ba25-535c-4c4a-8ef4-f4a56f3b3484","Type":"ContainerStarted","Data":"b3a055d7931dd0590afdf55979f3e535a3df89161b248797e36cf05dca02e84a"}
Nov 27 17:12:01 crc kubenswrapper[4792]: W1127 17:12:01.528753 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod606bdb55_1c87_4057_bd78_91a9769dcd1c.slice/crio-6a10c2c92a3628e8c5b4f9cfde4aa1bb6a49b54c7135f41b03d32ce6c9c67a3d WatchSource:0}: Error finding container 6a10c2c92a3628e8c5b4f9cfde4aa1bb6a49b54c7135f41b03d32ce6c9c67a3d: Status 404 returned error can't find the container with id 6a10c2c92a3628e8c5b4f9cfde4aa1bb6a49b54c7135f41b03d32ce6c9c67a3d
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.534085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t47sj" event={"ID":"94e411a4-eb23-4669-ba67-26f4e6cc6605","Type":"ContainerStarted","Data":"f295f31bb12bc828115ed78f95badf194856c9542ff23a8b9ab1c776e8ce6d05"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.542382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" event={"ID":"8412b381-cbf1-4f9c-8e93-6991812b725d","Type":"ContainerStarted","Data":"d796ecb60e246a664a762fa8c47df0e308df6b112f8c1c0546c1d21133221218"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.542423 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" event={"ID":"8412b381-cbf1-4f9c-8e93-6991812b725d","Type":"ContainerStarted","Data":"a5c41ce3754aa09f937a3f1ae86a9e0530519e3fb155695a4d5db1bb688c0f73"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.542749 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.555100 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-r96bc" event={"ID":"a0015215-3f91-43fc-bbee-2560bb8f4c62","Type":"ContainerStarted","Data":"054c987007b0faeaa9a0d65cc0dd533c5646a2fa6cf36fa4364982667da61ebe"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.555157 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-r96bc" event={"ID":"a0015215-3f91-43fc-bbee-2560bb8f4c62","Type":"ContainerStarted","Data":"73b8b893a348d8b1bb13354ef6ba8705bcfd6ceb5b645c8b5e0aeb7c577e90e7"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.555497 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-r96bc"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.558724 4792 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-jz8bh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" start-of-body=
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.558755 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-r96bc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.558777 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.558793 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r96bc" podUID="a0015215-3f91-43fc-bbee-2560bb8f4c62" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.559415 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" event={"ID":"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd","Type":"ContainerStarted","Data":"a2ed8bd345361df0b36f81adb2a9deb8f38ee60c2d796d2e6011165a933a7efd"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.569014 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" event={"ID":"ecf50e14-57e0-49d5-8581-6842527b63bc","Type":"ContainerStarted","Data":"2aa94c405764b4e7d4f2ff3523da4fc59946e158dabd72ae27a6f71caa1ef536"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.571397 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" event={"ID":"165ef6ad-2a73-4126-b694-938ecbe6cd77","Type":"ContainerStarted","Data":"773b1750916f8617ea21d9b53b6bfdfbd2dd9b760e746cc7be4e9cbca0c20a85"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.577510 4792 generic.go:334] "Generic (PLEG): container finished" podID="0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1" containerID="40e86e8b702ca0e5a360316dafda107fd9cf18f093b940d75e4ef93603afa828" exitCode=0
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.577592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" event={"ID":"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1","Type":"ContainerDied","Data":"40e86e8b702ca0e5a360316dafda107fd9cf18f093b940d75e4ef93603afa828"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.577612 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" event={"ID":"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1","Type":"ContainerStarted","Data":"9d8fe6a104ba233395e31ef5e73c3b697d02a24a6dce20382575bbb533a8b3b3"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.582032 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" event={"ID":"551f21a4-3206-4d98-82a5-82274989d3ae","Type":"ContainerStarted","Data":"ba7e0fb0473afcf9188b99abe5d62da3bb83eea845f7ffe3e88e99ec86715ae1"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.582075 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" event={"ID":"551f21a4-3206-4d98-82a5-82274989d3ae","Type":"ContainerStarted","Data":"0b119a7363500a0ec6634c1b8696f07da27d4c68abde6239601ccc684ec051f7"}
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.590344 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.592218 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-g7kpw"
Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.595581 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.095566941 +0000 UTC m=+144.438393259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.692677 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.693915 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.193887627 +0000 UTC m=+144.536713945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.781768 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.795139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.795665 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.295632325 +0000 UTC m=+144.638458643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.797475 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-t47sj" podStartSLOduration=4.797451049 podStartE2EDuration="4.797451049s" podCreationTimestamp="2025-11-27 17:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:01.778923379 +0000 UTC m=+144.121749707" watchObservedRunningTime="2025-11-27 17:12:01.797451049 +0000 UTC m=+144.140277387"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.817198 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.830242 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7psgx"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.832702 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tswzz"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.842338 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4855s"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.846847 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.867599 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5jcbf"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.869413 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fz8kn"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.870182 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-k86pd" podStartSLOduration=122.870172926 podStartE2EDuration="2m2.870172926s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:01.857223472 +0000 UTC m=+144.200049790" watchObservedRunningTime="2025-11-27 17:12:01.870172926 +0000 UTC m=+144.212999244"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.879170 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.886445 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.896477 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.896862 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.396847977 +0000 UTC m=+144.739674295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.904005 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7b9r2"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.915101 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.916190 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" podStartSLOduration=121.91617164 podStartE2EDuration="2m1.91617164s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:01.913612354 +0000 UTC m=+144.256438672" watchObservedRunningTime="2025-11-27 17:12:01.91617164 +0000 UTC m=+144.258997958"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.932731 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.932982 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.948501 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wx7m6"]
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.950991 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" podStartSLOduration=122.950970842 podStartE2EDuration="2m2.950970842s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:01.947310324 +0000 UTC m=+144.290136642" watchObservedRunningTime="2025-11-27 17:12:01.950970842 +0000 UTC m=+144.293797160"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.977135 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qgj4h" podStartSLOduration=121.977103297 podStartE2EDuration="2m1.977103297s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:01.975545331 +0000 UTC m=+144.318371639" watchObservedRunningTime="2025-11-27 17:12:01.977103297 +0000 UTC m=+144.319929615"
Nov 27 17:12:01 crc kubenswrapper[4792]: I1127 17:12:01.997575 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:01 crc kubenswrapper[4792]: E1127 17:12:01.997864 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.497853733 +0000 UTC m=+144.840680051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.021107 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qgvjw" podStartSLOduration=122.021091042 podStartE2EDuration="2m2.021091042s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.019433103 +0000 UTC m=+144.362259421" watchObservedRunningTime="2025-11-27 17:12:02.021091042 +0000 UTC m=+144.363917360"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.027398 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qgvjw"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.036765 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 27 17:12:02 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Nov 27 17:12:02 crc kubenswrapper[4792]: [+]process-running ok
Nov 27 17:12:02 crc kubenswrapper[4792]: healthz check failed
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.036834 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.098239 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-h5bxn" podStartSLOduration=125.098220069 podStartE2EDuration="2m5.098220069s" podCreationTimestamp="2025-11-27 17:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.096162588 +0000 UTC m=+144.438988896" watchObservedRunningTime="2025-11-27 17:12:02.098220069 +0000 UTC m=+144.441046387"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.098489 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.098846 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.598833098 +0000 UTC m=+144.941659406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.143919 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-r96bc" podStartSLOduration=123.143902104 podStartE2EDuration="2m3.143902104s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.143259765 +0000 UTC m=+144.486086093" watchObservedRunningTime="2025-11-27 17:12:02.143902104 +0000 UTC m=+144.486728422"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.178682 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j97g2" podStartSLOduration=123.178632434 podStartE2EDuration="2m3.178632434s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.175079149 +0000 UTC m=+144.517905477" watchObservedRunningTime="2025-11-27 17:12:02.178632434 +0000 UTC m=+144.521458772"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.199712 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.201333 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.701317587 +0000 UTC m=+145.044143905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.262281 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x447d" podStartSLOduration=123.262255834 podStartE2EDuration="2m3.262255834s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.25669471 +0000 UTC m=+144.599521058" watchObservedRunningTime="2025-11-27 17:12:02.262255834 +0000 UTC m=+144.605082152"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.301390 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.301841 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.801812658 +0000 UTC m=+145.144638976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.404511 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.405007 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:02.904986758 +0000 UTC m=+145.247813136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.505962 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.506664 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.006628332 +0000 UTC m=+145.349454650 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.604195 4792 generic.go:334] "Generic (PLEG): container finished" podID="84fc3c58-35a0-4347-b094-27f5fc4e7aae" containerID="9a3fb9a5f89e508148dc72db0b55dde8b70a17b2c4828a1174d3fc209c73a4d5" exitCode=0
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.604255 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" event={"ID":"84fc3c58-35a0-4347-b094-27f5fc4e7aae","Type":"ContainerDied","Data":"9a3fb9a5f89e508148dc72db0b55dde8b70a17b2c4828a1174d3fc209c73a4d5"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.604283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" event={"ID":"84fc3c58-35a0-4347-b094-27f5fc4e7aae","Type":"ContainerStarted","Data":"8b454212d157736b7b98f2c6401abab3bb9a2434e992b153acfd13d8bf220b16"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.604511 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.608326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.608701 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.108688199 +0000 UTC m=+145.451514517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.611437 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" event={"ID":"8ac750b0-4cbd-474b-a99c-1ddafb554107","Type":"ContainerStarted","Data":"3e41553579fb3a7e9ce559079617b8a2a4db26c1b9b903d825a861119cb99bcc"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.621972 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" podStartSLOduration=123.621955413 podStartE2EDuration="2m3.621955413s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.621674134 +0000 UTC m=+144.964500452" watchObservedRunningTime="2025-11-27 17:12:02.621955413 +0000 UTC m=+144.964781731"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.623488 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" event={"ID":"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1","Type":"ContainerStarted","Data":"b95d46101336fd9bae80bbefa7daf4c47589b110c3aadf01e785c8aecd639449"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.630582 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" event={"ID":"003746e3-d80d-40d7-aac1-03bec863a85d","Type":"ContainerStarted","Data":"cb212055a32ba5045ff797f24872b195a029d1ff3b775e7807e4fcf8c75784e2"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.663750 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" event={"ID":"39ccadf8-7389-4ffd-a72d-efd46066c233","Type":"ContainerStarted","Data":"5d03fb4e8bce699710801790d3a07766c871b53ed86f3a98ff6ab9de6820c93f"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.663792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" event={"ID":"39ccadf8-7389-4ffd-a72d-efd46066c233","Type":"ContainerStarted","Data":"d4a3be0672aa2f42cec834153104b4cfdf871d41472b27946deebca95b345650"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.675460 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" event={"ID":"606bdb55-1c87-4057-bd78-91a9769dcd1c","Type":"ContainerStarted","Data":"89709243f671b4a09287bced73dc7878858a5650c24041e96e11c17a3b283c64"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.675515 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" event={"ID":"606bdb55-1c87-4057-bd78-91a9769dcd1c","Type":"ContainerStarted","Data":"6a10c2c92a3628e8c5b4f9cfde4aa1bb6a49b54c7135f41b03d32ce6c9c67a3d"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.687497 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k4hn5" podStartSLOduration=122.687479446 podStartE2EDuration="2m2.687479446s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.680458318 +0000 UTC m=+145.023284636" watchObservedRunningTime="2025-11-27 17:12:02.687479446 +0000 UTC m=+145.030305764"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.707935 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" event={"ID":"07498167-d644-44f1-943b-fda3bd8de13d","Type":"ContainerStarted","Data":"f9e703a19d07607f3cb1a257df4a278b80fa8c130c11322b7c1743e0c0c3af4f"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.707978 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" event={"ID":"07498167-d644-44f1-943b-fda3bd8de13d","Type":"ContainerStarted","Data":"6c741e31ffec08ed786742f26e8c6452e5da3a79dca42d93ea141efc378e86ff"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.709207 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.710089 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.210075956 +0000 UTC m=+145.552902274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.719381 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" event={"ID":"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd","Type":"ContainerStarted","Data":"791c0733f6952412fb849e914954255175dbe6f72e64d2b6b210bf221730dbee"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.719444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" event={"ID":"7bbb7ab5-a68c-402c-99d8-9cd47c361ccd","Type":"ContainerStarted","Data":"94ae2c680c0614f06fef736f6e0a70027ab9df45953f20526d1b5f0474a7dca6"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.731004 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8ktks" podStartSLOduration=122.730989567 podStartE2EDuration="2m2.730989567s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.730201073 +0000 UTC m=+145.073027391" watchObservedRunningTime="2025-11-27 17:12:02.730989567 +0000 UTC m=+145.073815875"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.746934 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4855s" event={"ID":"0436d85a-59ed-46ee-b01c-d82423c932b0","Type":"ContainerStarted","Data":"15ab36205bc58ddd5968092bc1315d7d3030ac555df51739ab2629f10e44f516"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.764274 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" event={"ID":"e0f375d2-c00a-49a3-963d-5d2bb71fa625","Type":"ContainerStarted","Data":"7f7190d19d86df6a26763c1cc93e7f054a629ee35d60c9224f04dbed76cc0ffa"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.764331 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" event={"ID":"e0f375d2-c00a-49a3-963d-5d2bb71fa625","Type":"ContainerStarted","Data":"6b11cdcd771cde7542ee9149e10b91c1165fc9f0b4aed2d995ab9e90e0346177"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.793255 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-6lwpq" podStartSLOduration=122.793238923 podStartE2EDuration="2m2.793238923s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.774003592 +0000 UTC m=+145.116829910" watchObservedRunningTime="2025-11-27 17:12:02.793238923 +0000 UTC m=+145.136065241"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.801144 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-cx4g9" podStartSLOduration=122.801122537 podStartE2EDuration="2m2.801122537s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.801029044 +0000 UTC m=+145.143855362" watchObservedRunningTime="2025-11-27 17:12:02.801122537 +0000 UTC m=+145.143948855"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.811286 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.812097 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.312082822 +0000 UTC m=+145.654909140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.855280 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" event={"ID":"cd1802c8-be73-40c4-b495-54eede995a32","Type":"ContainerStarted","Data":"1a994333d2eac94a8dc8359ab96da34c61bdcfa3363c9240240d295567037c51"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.855553 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" event={"ID":"cd1802c8-be73-40c4-b495-54eede995a32","Type":"ContainerStarted","Data":"443ecd3c44bc0ea6bc09452f2d1329e856b952dc948dbd5b409ff94ce75a88f9"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.867496 4792 generic.go:334] "Generic (PLEG): container finished" podID="165ef6ad-2a73-4126-b694-938ecbe6cd77" containerID="4199cb185331867c95ef8a36289dde7aed0418f55c6b322e5c93c54990cabd84" exitCode=0
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.867575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" event={"ID":"165ef6ad-2a73-4126-b694-938ecbe6cd77","Type":"ContainerDied","Data":"4199cb185331867c95ef8a36289dde7aed0418f55c6b322e5c93c54990cabd84"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.871596 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" event={"ID":"00795f3d-b1b4-494b-8898-380798319532","Type":"ContainerStarted","Data":"4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.874708 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.911619 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" event={"ID":"01e31968-eb99-4a34-a9da-25ffd0101936","Type":"ContainerStarted","Data":"1135a22e2ae951271eb54bcc3efaab2f4c2a5fb7984836d7e2cb89ceba5a588f"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.911716 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" event={"ID":"01e31968-eb99-4a34-a9da-25ffd0101936","Type":"ContainerStarted","Data":"93c4f0a952cd83235f19d0696f01f83b904f6401823b6050a53a97b054a68d0c"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.912213 4792 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8gb65 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.912237 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" podUID="00795f3d-b1b4-494b-8898-380798319532" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.912796 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.912929 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.412916232 +0000 UTC m=+145.755742550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.913049 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:02 crc kubenswrapper[4792]: E1127 17:12:02.913282 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.413273913 +0000 UTC m=+145.756100241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.920901 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" event={"ID":"bc302df1-2d39-41a5-be37-eaccc5cde214","Type":"ContainerStarted","Data":"60bf8e635033d03b5439dddfd94b814dff8c2b613154c3bccba9b0cd979ac397"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.920939 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" event={"ID":"bc302df1-2d39-41a5-be37-eaccc5cde214","Type":"ContainerStarted","Data":"dae9161ec7cb84d9c103bad6a13de5a1206f0258007185a3f4d6e82564e80f70"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.921711 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.926324 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-w7lbb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body=
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.926364 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" podUID="bc302df1-2d39-41a5-be37-eaccc5cde214" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.928752 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" event={"ID":"228abb37-ff66-48b3-a882-d67ca901a322","Type":"ContainerStarted","Data":"1c41c2c2f100456748486f3629080fcf52f3a10e833b71a29c230ec50caa4410"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.928785 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" event={"ID":"228abb37-ff66-48b3-a882-d67ca901a322","Type":"ContainerStarted","Data":"3cc0bb6d0ba7fe5eb8fe83c6895f7dca21cf2478a8efb3b58fd3cda7594f811f"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.929734 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.933752 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" event={"ID":"ecf50e14-57e0-49d5-8581-6842527b63bc","Type":"ContainerStarted","Data":"8eac9a7640b849d0add4442e04f60528fd3faca2cb814bc329c30fbbce002caa"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.933791 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" event={"ID":"ecf50e14-57e0-49d5-8581-6842527b63bc","Type":"ContainerStarted","Data":"088653fd335f69cf90b98da6a829d629790d62668bd02c6b936ab497bd12cb2f"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.939328 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" podStartSLOduration=123.939306475 podStartE2EDuration="2m3.939306475s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.938909293 +0000 UTC m=+145.281735611" watchObservedRunningTime="2025-11-27 17:12:02.939306475 +0000 UTC m=+145.282132803"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.951196 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7psgx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.951194 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" event={"ID":"bd629fc6-afbd-4fba-ad8a-af3aa86487b3","Type":"ContainerStarted","Data":"b94f67e17062a64066e9bd2f230a9f7ce471c839f93f33aef3a678d773f34cfe"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.951227 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" podUID="228abb37-ff66-48b3-a882-d67ca901a322" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.959286 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" event={"ID":"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4","Type":"ContainerStarted","Data":"f0cdedd2480594df608faee0d485ee1249c398382c2e387318adcc6cee518d83"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.962325 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.970503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" event={"ID":"551f21a4-3206-4d98-82a5-82274989d3ae","Type":"ContainerStarted","Data":"18a3aacc844d10c83694d3ffe2c14d157425bc6aba04b8c0dee52af62dc9579e"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.977019 4792 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8dhs9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.977060 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" podUID="0a712c8a-b3db-40bd-9e6f-cd23b095e2a4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.979911 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nwp5c" podStartSLOduration=122.979896859 podStartE2EDuration="2m2.979896859s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.962830963 +0000 UTC m=+145.305657291" watchObservedRunningTime="2025-11-27 17:12:02.979896859 +0000 UTC m=+145.322723177"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.985069 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" event={"ID":"07e202a3-1f47-409c-83f2-a066ddb1ffe2","Type":"ContainerStarted","Data":"1e323dc74ba22df4788dbdfc52627d6e291317dee499d83bddbcd9aa5f895342"}
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.985968 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8g9jq" podStartSLOduration=122.98297044 podStartE2EDuration="2m2.98297044s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:02.979078185 +0000 UTC m=+145.321904503" watchObservedRunningTime="2025-11-27 17:12:02.98297044 +0000 UTC m=+145.325796758"
Nov 27 17:12:02 crc kubenswrapper[4792]: I1127 17:12:02.999561 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" event={"ID":"d1a95901-32c9-465a-a8b6-e44c289beb03","Type":"ContainerStarted","Data":"516c52d87a5448dc1a9b07b8471887b8c8fdd8591d54a78d61b1e45ebce9f7cb"}
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.024005 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.025159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" event={"ID":"b1143d96-b3c1-4892-8f6b-3672cab07e9c","Type":"ContainerStarted","Data":"d03dbd7ad1d9b8e6b1f05bbdd9026bb09346b0ca9de4db325760bf09294432b3"}
Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.025218 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.525204753 +0000 UTC m=+145.868031071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.035758 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" event={"ID":"7418e8a9-d007-44c8-9969-0097ab135a74","Type":"ContainerStarted","Data":"de2fd5f31b952d1b3db594b4049404827fc34e9d6d3ea17a27d3493d6df66bb6"}
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.038207 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 27 17:12:03 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Nov 27 17:12:03 crc kubenswrapper[4792]: [+]process-running ok
Nov 27 17:12:03 crc kubenswrapper[4792]: healthz check failed
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.038289 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.047120 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" podStartSLOduration=123.047103252 podStartE2EDuration="2m3.047103252s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:03.002282753 +0000 UTC m=+145.345109071" watchObservedRunningTime="2025-11-27 17:12:03.047103252 +0000 UTC m=+145.389929560"
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.065487 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" podStartSLOduration=123.065472887 podStartE2EDuration="2m3.065472887s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:03.0375726 +0000 UTC m=+145.380398908" watchObservedRunningTime="2025-11-27 17:12:03.065472887 +0000 UTC m=+145.408299205"
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.078056 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" event={"ID":"98b6db89-c5c7-4ec3-90dd-013390b75f20","Type":"ContainerStarted","Data":"918bdeb1ea7376b11570410f518fe72ae9ff6acfad65b05ae4a60f15456e27a7"}
Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.078604 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" podStartSLOduration=124.078585586 podStartE2EDuration="2m4.078585586s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC"
observedRunningTime="2025-11-27 17:12:03.076218566 +0000 UTC m=+145.419044884" watchObservedRunningTime="2025-11-27 17:12:03.078585586 +0000 UTC m=+145.421411904" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.087964 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" event={"ID":"1d1f4c3b-1bc7-49c0-9aba-b073020ed51f","Type":"ContainerStarted","Data":"5e91881ad4b503edcb2b71addbb421758125b693de084860b092d524efbe8283"} Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.088035 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" event={"ID":"aea3e158-90f4-4df7-9fbd-65cfdf94a813","Type":"ContainerStarted","Data":"2685bea4053e30bb4673b2c6ecdfd8a019647c854cdb3bad11738e095b24ec96"} Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.090107 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" event={"ID":"fbf0570b-233b-4046-8d07-e164c66cf429","Type":"ContainerStarted","Data":"0e9a3be8aa9197fceb606badc7d85f6f840dcee700d6d436632f863999f767d9"} Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.095247 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" event={"ID":"5d775ccc-6b5a-4d05-8138-4521d5fb8adc","Type":"ContainerStarted","Data":"fdbcaf369d35abe1b9c3eadac2cec14e5064f336022a709512f0cacd0d6a435f"} Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.095280 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" event={"ID":"5d775ccc-6b5a-4d05-8138-4521d5fb8adc","Type":"ContainerStarted","Data":"8a738111cb4309f59b92559ae6bcf7aa40a72653ed50b91240a2c8ee27d38020"} Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.100261 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" podStartSLOduration=123.100250509 podStartE2EDuration="2m3.100250509s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:03.098492246 +0000 UTC m=+145.441318564" watchObservedRunningTime="2025-11-27 17:12:03.100250509 +0000 UTC m=+145.443076827" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.115224 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wx7m6" event={"ID":"31dcae62-0572-4873-b054-06b731401d8e","Type":"ContainerStarted","Data":"6b3c9e09fbaa94a3b61e1cc94ccf48f9eceffd485cce60d8d003d8f71d81a6e8"} Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.118848 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-r96bc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.118890 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r96bc" podUID="a0015215-3f91-43fc-bbee-2560bb8f4c62" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.133824 
4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-htmtg" podStartSLOduration=124.133809324 podStartE2EDuration="2m4.133809324s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:03.132142454 +0000 UTC m=+145.474968772" watchObservedRunningTime="2025-11-27 17:12:03.133809324 +0000 UTC m=+145.476635642" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.134177 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.136682 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.137203 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.637188364 +0000 UTC m=+145.980014682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.165970 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.166109 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" podStartSLOduration=123.166089751 podStartE2EDuration="2m3.166089751s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:03.163813754 +0000 UTC m=+145.506640072" watchObservedRunningTime="2025-11-27 17:12:03.166089751 +0000 UTC m=+145.508916069" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.218564 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wx7m6" podStartSLOduration=6.218542917 podStartE2EDuration="6.218542917s" podCreationTimestamp="2025-11-27 17:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:03.187994301 +0000 UTC m=+145.530820619" watchObservedRunningTime="2025-11-27 17:12:03.218542917 +0000 UTC m=+145.561369235" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.240234 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.242705 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.742678783 +0000 UTC m=+146.085505141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.268160 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-z84xw" podStartSLOduration=124.268141668 podStartE2EDuration="2m4.268141668s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:03.268009514 +0000 UTC m=+145.610835832" watchObservedRunningTime="2025-11-27 17:12:03.268141668 +0000 UTC m=+145.610967976" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.346791 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.347216 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.847201753 +0000 UTC m=+146.190028081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.353583 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tswzz" podStartSLOduration=123.353567672 podStartE2EDuration="2m3.353567672s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:03.305576238 +0000 UTC m=+145.648402556" watchObservedRunningTime="2025-11-27 17:12:03.353567672 +0000 UTC m=+145.696393990" Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.449216 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.449880 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:03.949865908 +0000 UTC m=+146.292692226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.550722 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.555034 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.055019017 +0000 UTC m=+146.397845335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.651620 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.651751 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.151729445 +0000 UTC m=+146.494555763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.652245 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.652601 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.15258316 +0000 UTC m=+146.495409478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.754336 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.754580 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.254556445 +0000 UTC m=+146.597382763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.754832 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.755122 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.255113191 +0000 UTC m=+146.597939509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.856139 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.856431 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.356415536 +0000 UTC m=+146.699241854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:03 crc kubenswrapper[4792]: I1127 17:12:03.957681 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:03 crc kubenswrapper[4792]: E1127 17:12:03.958072 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.45805622 +0000 UTC m=+146.800882538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.029835 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 27 17:12:04 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Nov 27 17:12:04 crc kubenswrapper[4792]: [+]process-running ok Nov 27 17:12:04 crc kubenswrapper[4792]: healthz check failed Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.029890 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.059230 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.059424 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.559389186 +0000 UTC m=+146.902215504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.059785 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.060145 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.560130508 +0000 UTC m=+146.902956826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.121040 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" event={"ID":"7418e8a9-d007-44c8-9969-0097ab135a74","Type":"ContainerStarted","Data":"8429ad2833974704451322a9b383240350fb57fd8e07dda6e297d8c94efd836b"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.122860 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" event={"ID":"0a712c8a-b3db-40bd-9e6f-cd23b095e2a4","Type":"ContainerStarted","Data":"775ea66f57f000def9d2647fe787432b4d6237c325c1d330cd5e15843e575899"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.125101 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" event={"ID":"aea3e158-90f4-4df7-9fbd-65cfdf94a813","Type":"ContainerStarted","Data":"3091697374f5ca696df303ada68dd8ac83a40e8e501e7a7d109053ec2ee8eca7"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.125148 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" event={"ID":"aea3e158-90f4-4df7-9fbd-65cfdf94a813","Type":"ContainerStarted","Data":"019e7ce159b3603ecc156e9936b3a78eec70a690a78e42c0114b63e68ce19fb2"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.130258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" event={"ID":"fbf0570b-233b-4046-8d07-e164c66cf429","Type":"ContainerStarted","Data":"962a6d888c99003f2e0151dab60fce703e4a567913ed6e48e6d1ee8745e76195"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.130325 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" event={"ID":"fbf0570b-233b-4046-8d07-e164c66cf429","Type":"ContainerStarted","Data":"78beb2c1821b1d37e744595468593a50336131c50a4016e07faa8deb44836f17"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.132197 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" event={"ID":"98b6db89-c5c7-4ec3-90dd-013390b75f20","Type":"ContainerStarted","Data":"d1b7bbfbc7d38c69efce7a69115b824ea84a6b77a406f0a8db6233ddf6d12192"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.134356 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" event={"ID":"165ef6ad-2a73-4126-b694-938ecbe6cd77","Type":"ContainerStarted","Data":"ef8d368f3eeac5c04150cd4d21b2ed590e74d0be1d5259ac761d3d8f4ba3a80c"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.135698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ltcfx" event={"ID":"b1143d96-b3c1-4892-8f6b-3672cab07e9c","Type":"ContainerStarted","Data":"106e77abdd034c4dc2451b6129390ac8aee7822b8a2cc504706c1b82e1043704"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.136573 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8dhs9" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.138359 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" event={"ID":"bd629fc6-afbd-4fba-ad8a-af3aa86487b3","Type":"ContainerStarted","Data":"7b5ec7f5b7b4f71ab4498ea0b96afcb22cf974bab7c589d5e6d6b96b8c024e9d"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.138394 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" event={"ID":"bd629fc6-afbd-4fba-ad8a-af3aa86487b3","Type":"ContainerStarted","Data":"7424b220fd93a2f74681e8cba5213df2d2ebf4b88cff42ae03481bf3c7c6b275"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.138410 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.140114 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" event={"ID":"606bdb55-1c87-4057-bd78-91a9769dcd1c","Type":"ContainerStarted","Data":"4b8e1a707d0d2fbe89c346ec8516e6f273574f9da3f65438872e78bd02425363"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.142468 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" event={"ID":"0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1","Type":"ContainerStarted","Data":"3a285476c0789b4ea9c526e5d0207c6b66189706959df927d8829076975659f6"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.145055 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" event={"ID":"003746e3-d80d-40d7-aac1-03bec863a85d","Type":"ContainerStarted","Data":"9b10bcdad978dec2a71bc911b5ad59c9792e0030f3999e557f18f9b230670471"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.149137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4855s" event={"ID":"0436d85a-59ed-46ee-b01c-d82423c932b0","Type":"ContainerStarted","Data":"057a42a4bd5c7c2e3285db81b2df743a4141a3aa1ddb2bf3c3c519c00e1b4fff"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.149174 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4855s" event={"ID":"0436d85a-59ed-46ee-b01c-d82423c932b0","Type":"ContainerStarted","Data":"3d5f99251f8454698e970678117e2e8d3ef82458e4fcb5b54d1c67ded8d25f4b"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.149340 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4855s" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.151599 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wx7m6" event={"ID":"31dcae62-0572-4873-b054-06b731401d8e","Type":"ContainerStarted","Data":"94f51864ab192b34c2e1ab3885ab0f2fa87923273a5028c2ef2cc494d38d4aa0"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.153151 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" event={"ID":"cd1802c8-be73-40c4-b495-54eede995a32","Type":"ContainerStarted","Data":"144f56287fd72f93852a732e5b31cac3e159cb66afa842f4e023be980bcb9992"} Nov 27 
17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.154785 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" event={"ID":"8ac750b0-4cbd-474b-a99c-1ddafb554107","Type":"ContainerStarted","Data":"126fbd970289490c8c47488750cb9f7aac80e451d77bae7d82877550afa4daba"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.155005 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.156638 4792 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bt9c5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.156693 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" podUID="8ac750b0-4cbd-474b-a99c-1ddafb554107" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.156771 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" event={"ID":"07e202a3-1f47-409c-83f2-a066ddb1ffe2","Type":"ContainerStarted","Data":"a2afa8b8f89b8d445d72e5faae1ddf4353f83f9942c96979b0bd2313dc3184cd"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.160037 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" event={"ID":"d1a95901-32c9-465a-a8b6-e44c289beb03","Type":"ContainerStarted","Data":"f71bc1cf7064e2b15b2944af5e840302674482ced485ef7206f8a08814dd8810"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.160086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" event={"ID":"d1a95901-32c9-465a-a8b6-e44c289beb03","Type":"ContainerStarted","Data":"28d935d1707178c8fffa8d5498a251a9fddcede34e4e5787b85845f0b714a217"} Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.160980 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-r96bc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.161037 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r96bc" podUID="a0015215-3f91-43fc-bbee-2560bb8f4c62" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.161357 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7psgx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.161384 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" 
podUID="228abb37-ff66-48b3-a882-d67ca901a322" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.173280 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.173813 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.673799789 +0000 UTC m=+147.016626107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.179854 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.202986 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hdjtf" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.213814 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7b9r2" podStartSLOduration=124.213800205 podStartE2EDuration="2m4.213800205s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.158593428 +0000 UTC m=+146.501419746" watchObservedRunningTime="2025-11-27 17:12:04.213800205 +0000 UTC m=+146.556626523" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.247993 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" podStartSLOduration=125.247978429 podStartE2EDuration="2m5.247978429s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.215048262 +0000 UTC m=+146.557874590" watchObservedRunningTime="2025-11-27 17:12:04.247978429 +0000 UTC m=+146.590804747" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.248481 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rr6k4" podStartSLOduration=125.248475804 podStartE2EDuration="2m5.248475804s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.247962059 +0000 UTC m=+146.590788377" 
watchObservedRunningTime="2025-11-27 17:12:04.248475804 +0000 UTC m=+146.591302122" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.275968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.282484 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.782469622 +0000 UTC m=+147.125295940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.282525 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xk5xs" podStartSLOduration=124.282509003 podStartE2EDuration="2m4.282509003s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.281603206 +0000 UTC m=+146.624429524" watchObservedRunningTime="2025-11-27 17:12:04.282509003 +0000 UTC m=+146.625335321" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.379142 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.379603 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.879584802 +0000 UTC m=+147.222411120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.414042 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" podStartSLOduration=124.414027724 podStartE2EDuration="2m4.414027724s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.375987666 +0000 UTC m=+146.718813984" watchObservedRunningTime="2025-11-27 17:12:04.414027724 +0000 UTC m=+146.756854042" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.414707 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-t8xxz" podStartSLOduration=124.414701364 podStartE2EDuration="2m4.414701364s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.412832078 +0000 UTC m=+146.755658396" watchObservedRunningTime="2025-11-27 17:12:04.414701364 +0000 UTC m=+146.757527682" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.436315 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd" podStartSLOduration=124.436296914 podStartE2EDuration="2m4.436296914s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.430376389 +0000 UTC m=+146.773202707" watchObservedRunningTime="2025-11-27 17:12:04.436296914 +0000 UTC m=+146.779123232" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.480299 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.480702 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:04.98068382 +0000 UTC m=+147.323510138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.490153 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4855s" podStartSLOduration=7.49013541 podStartE2EDuration="7.49013541s" podCreationTimestamp="2025-11-27 17:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.456863074 +0000 UTC m=+146.799689392" watchObservedRunningTime="2025-11-27 17:12:04.49013541 +0000 UTC m=+146.832961728" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.518488 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8hf2h" podStartSLOduration=124.518474561 podStartE2EDuration="2m4.518474561s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.517201523 +0000 UTC m=+146.860027841" watchObservedRunningTime="2025-11-27 17:12:04.518474561 +0000 UTC m=+146.861300879" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.519567 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" podStartSLOduration=124.519559223 podStartE2EDuration="2m4.519559223s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.491579223 +0000 UTC m=+146.834405541" watchObservedRunningTime="2025-11-27 17:12:04.519559223 +0000 UTC m=+146.862385541" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.572949 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5jcbf" podStartSLOduration=124.572930866 podStartE2EDuration="2m4.572930866s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.570353609 +0000 UTC m=+146.913179927" watchObservedRunningTime="2025-11-27 17:12:04.572930866 +0000 UTC m=+146.915757184" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.581619 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.581787 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-27 17:12:05.081761128 +0000 UTC m=+147.424587446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.582133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.582454 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.082447618 +0000 UTC m=+147.425273936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.601960 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fw9ts" podStartSLOduration=124.601944696 podStartE2EDuration="2m4.601944696s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:04.600120992 +0000 UTC m=+146.942947310" watchObservedRunningTime="2025-11-27 17:12:04.601944696 +0000 UTC m=+146.944771014" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.683491 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.683742 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.183716492 +0000 UTC m=+147.526542810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.684019 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.684343 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.18433334 +0000 UTC m=+147.527159658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.753804 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.753854 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.785504 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.785941 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.285926753 +0000 UTC m=+147.628753071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.886798 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.887199 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.387180196 +0000 UTC m=+147.730006514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.937734 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.937805 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.988022 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.988191 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.488167621 +0000 UTC m=+147.830993939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:04 crc kubenswrapper[4792]: I1127 17:12:04.988339 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:04 crc kubenswrapper[4792]: E1127 17:12:04.988617 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.488605654 +0000 UTC m=+147.831431972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.027876 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 27 17:12:05 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Nov 27 17:12:05 crc kubenswrapper[4792]: [+]process-running ok Nov 27 17:12:05 crc kubenswrapper[4792]: healthz check failed Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.027945 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.089762 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.090102 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.590084884 +0000 UTC m=+147.932911222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.161963 4792 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-w7lbb container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.162015 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" podUID="bc302df1-2d39-41a5-be37-eaccc5cde214" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.173524 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" event={"ID":"98b6db89-c5c7-4ec3-90dd-013390b75f20","Type":"ContainerStarted","Data":"293f73f11e15f7fac8231b5b9601feb57314d0f70806829b06ee8e316ab2af56"} Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.173784 4792 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7psgx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.174103 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" podUID="228abb37-ff66-48b3-a882-d67ca901a322" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.191725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.192196 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.692182862 +0000 UTC m=+148.035009180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.194794 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bt9c5" Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.295135 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.296454 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.796440624 +0000 UTC m=+148.139266942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.310828 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w7lbb" Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.400258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.400625 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:05.900614604 +0000 UTC m=+148.243440922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.500983 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.501440 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.001422134 +0000 UTC m=+148.344248452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.602737 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.603094 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.103074919 +0000 UTC m=+148.445901237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.646983 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.704015 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.704227 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.204197278 +0000 UTC m=+148.547023606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.704295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.704598 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.204586049 +0000 UTC m=+148.547412367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.805313 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.805489 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.305465321 +0000 UTC m=+148.648291639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.805870 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.806211 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.306201073 +0000 UTC m=+148.649027391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.906405 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:05 crc kubenswrapper[4792]: E1127 17:12:05.906798 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.406783546 +0000 UTC m=+148.749609864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.972139 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c8djv"] Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.973124 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:05 crc kubenswrapper[4792]: I1127 17:12:05.977274 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.006397 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c8djv"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.008165 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.008538 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.508519844 +0000 UTC m=+148.851346252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.032061 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 27 17:12:06 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Nov 27 17:12:06 crc kubenswrapper[4792]: [+]process-running ok Nov 27 17:12:06 crc kubenswrapper[4792]: healthz check failed Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.032119 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.109047 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.109221 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.60919488 +0000 UTC m=+148.952021198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.109328 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-catalog-content\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.109569 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-utilities\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.109600 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v45k\" (UniqueName: \"kubernetes.io/projected/0aacc646-59f3-41fe-b59b-ce5fed81861f-kube-api-access-9v45k\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.109694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.109956 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.609943852 +0000 UTC m=+148.952770170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.129260 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tqrlt"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.130144 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.132670 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.157698 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqrlt"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.180184 4792 generic.go:334] "Generic (PLEG): container finished" podID="07e202a3-1f47-409c-83f2-a066ddb1ffe2" containerID="a2afa8b8f89b8d445d72e5faae1ddf4353f83f9942c96979b0bd2313dc3184cd" exitCode=0 Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.180250 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" event={"ID":"07e202a3-1f47-409c-83f2-a066ddb1ffe2","Type":"ContainerDied","Data":"a2afa8b8f89b8d445d72e5faae1ddf4353f83f9942c96979b0bd2313dc3184cd"} Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.182636 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" event={"ID":"98b6db89-c5c7-4ec3-90dd-013390b75f20","Type":"ContainerStarted","Data":"863a26da7f4e153875154c62a0bbb77799afadaa2a2c9f8119f1fe30a83f2930"} Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.182705 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" event={"ID":"98b6db89-c5c7-4ec3-90dd-013390b75f20","Type":"ContainerStarted","Data":"0d1ab19696d301e0ba3c9f5b9c747709ab678146fc86b27fd098cc0216f07ecd"} Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.193934 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bql6w" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.211134 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.211314 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v45k\" (UniqueName: \"kubernetes.io/projected/0aacc646-59f3-41fe-b59b-ce5fed81861f-kube-api-access-9v45k\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.211383 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.711338929 +0000 UTC m=+149.054165237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.211625 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqz5z\" (UniqueName: \"kubernetes.io/projected/bfe5d9aa-13db-4750-b440-dc1e83e149f0-kube-api-access-lqz5z\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.211727 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.211749 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-catalog-content\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.211826 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-catalog-content\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.211843 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-utilities\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.211904 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-utilities\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.212313 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-catalog-content\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.212335 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-utilities\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.212532 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.712524264 +0000 UTC m=+149.055350582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.265254 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v45k\" (UniqueName: \"kubernetes.io/projected/0aacc646-59f3-41fe-b59b-ce5fed81861f-kube-api-access-9v45k\") pod \"certified-operators-c8djv\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.285876 4792 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.286976 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.313246 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.313518 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqz5z\" (UniqueName: \"kubernetes.io/projected/bfe5d9aa-13db-4750-b440-dc1e83e149f0-kube-api-access-lqz5z\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.313752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-catalog-content\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.313854 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-utilities\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.314426 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.814408516 +0000 UTC m=+149.157234834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.316480 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-catalog-content\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.317903 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-utilities\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.330746 4792 patch_prober.go:28] interesting pod/apiserver-76f77b778f-smxkj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]log ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]etcd ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/generic-apiserver-start-informers ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/max-in-flight-filter ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 27 17:12:06 crc kubenswrapper[4792]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 27 17:12:06 crc kubenswrapper[4792]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/project.openshift.io-projectcache ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 27 17:12:06 crc kubenswrapper[4792]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 27 17:12:06 crc kubenswrapper[4792]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 27 17:12:06 crc kubenswrapper[4792]: livez check failed Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.330810 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" podUID="0587b0a0-34b8-4e9a-a0e2-1279ed1c65a1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.361187 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-fz8kn" podStartSLOduration=9.361165123 podStartE2EDuration="9.361165123s" podCreationTimestamp="2025-11-27 17:11:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:06.336378858 +0000 UTC m=+148.679205176" watchObservedRunningTime="2025-11-27 17:12:06.361165123 +0000 UTC m=+148.703991441" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.367630 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqz5z\" (UniqueName: \"kubernetes.io/projected/bfe5d9aa-13db-4750-b440-dc1e83e149f0-kube-api-access-lqz5z\") pod \"community-operators-tqrlt\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.375555 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d57cd"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.382119 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.386828 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d57cd"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.415485 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.415826 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:06.915814494 +0000 UTC m=+149.258640812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.445005 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.516113 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.516424 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-catalog-content\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.516491 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk6q9\" (UniqueName: \"kubernetes.io/projected/a9d4ac25-3b3d-44a2-b384-0cc385078f80-kube-api-access-sk6q9\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.516553 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-utilities\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.516673 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:07.016657715 +0000 UTC m=+149.359484033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.545784 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6p66q"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.547518 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.567115 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6p66q"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.617329 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.617372 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-utilities\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.617419 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-catalog-content\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.617455 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk6q9\" (UniqueName: \"kubernetes.io/projected/a9d4ac25-3b3d-44a2-b384-0cc385078f80-kube-api-access-sk6q9\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.617931 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:07.117920638 +0000 UTC m=+149.460746956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.618687 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-utilities\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.618886 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-catalog-content\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.646217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk6q9\" (UniqueName: \"kubernetes.io/projected/a9d4ac25-3b3d-44a2-b384-0cc385078f80-kube-api-access-sk6q9\") pod \"certified-operators-d57cd\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.710159 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c8djv"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.718445 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.718609 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.718633 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.718668 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-utilities\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 
17:12:06.718686 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-catalog-content\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.718712 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.718755 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qkkj\" (UniqueName: \"kubernetes.io/projected/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-kube-api-access-9qkkj\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.718786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.721455 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.721623 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:07.221596263 +0000 UTC m=+149.564422591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.725248 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.725485 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.726070 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.741949 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.803444 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.820312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qkkj\" (UniqueName: \"kubernetes.io/projected/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-kube-api-access-9qkkj\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.820349 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.820391 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-utilities\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.820406 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-catalog-content\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.820771 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-catalog-content\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.821210 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-utilities\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.821218 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:07.321199587 +0000 UTC m=+149.664025905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.832707 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.858251 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qkkj\" (UniqueName: \"kubernetes.io/projected/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-kube-api-access-9qkkj\") pod \"community-operators-6p66q\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.875580 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqrlt"] Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.875714 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.913912 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.915549 4792 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-27T17:12:06.286093656Z","Handler":null,"Name":""} Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.921230 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.921431 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:07.421403699 +0000 UTC m=+149.764230017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:06 crc kubenswrapper[4792]: I1127 17:12:06.921587 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:06 crc kubenswrapper[4792]: E1127 17:12:06.922023 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-27 17:12:07.422016637 +0000 UTC m=+149.764842955 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ft9s4" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.023306 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:07 crc kubenswrapper[4792]: E1127 17:12:07.023564 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-27 17:12:07.523548949 +0000 UTC m=+149.866375267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.044741 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 27 17:12:07 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Nov 27 17:12:07 crc kubenswrapper[4792]: [+]process-running ok Nov 27 17:12:07 crc kubenswrapper[4792]: healthz check failed Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.044791 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.122078 4792 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.122117 4792 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.124172 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.133016 4792 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.133083 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.195083 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ft9s4\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.221765 4792 generic.go:334] "Generic (PLEG): container finished" podID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerID="a80e8e6344e5c93558954624fb3af3080cd5af0f84b0349038e3bb54ac143e4e" exitCode=0 Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.221900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8djv" event={"ID":"0aacc646-59f3-41fe-b59b-ce5fed81861f","Type":"ContainerDied","Data":"a80e8e6344e5c93558954624fb3af3080cd5af0f84b0349038e3bb54ac143e4e"} Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.221960 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8djv" event={"ID":"0aacc646-59f3-41fe-b59b-ce5fed81861f","Type":"ContainerStarted","Data":"87337299c9012429c5a44efd4c146e9eba6c188554f2d08da000c0e11a375513"} Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.229930 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.230754 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.240226 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqrlt" event={"ID":"bfe5d9aa-13db-4750-b440-dc1e83e149f0","Type":"ContainerStarted","Data":"d039a5bad769d7ee140ff1b8a23729cd58b67e2380dc1ac67aff5d651a54497c"} Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.254847 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.332093 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.338935 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ec75c0b-1943-49d4-8813-bf8cc5218511-metrics-certs\") pod \"network-metrics-daemon-5qmhg\" (UID: \"2ec75c0b-1943-49d4-8813-bf8cc5218511\") " pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.419115 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qmhg" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.420760 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d57cd"] Nov 27 17:12:07 crc kubenswrapper[4792]: W1127 17:12:07.437826 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9d4ac25_3b3d_44a2_b384_0cc385078f80.slice/crio-eaae724b1a2954d2654a58961748ef48a7a67d8430badc487acd99918a6a5adb WatchSource:0}: Error finding container eaae724b1a2954d2654a58961748ef48a7a67d8430badc487acd99918a6a5adb: Status 404 returned error can't find the container with id eaae724b1a2954d2654a58961748ef48a7a67d8430badc487acd99918a6a5adb Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.473677 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6p66q"] Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.474448 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:07 crc kubenswrapper[4792]: W1127 17:12:07.494419 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6879b0d9_3b16_4233_a329_ed6fd9c58bd8.slice/crio-4a8fa6214b633f8e7edd42fe7255909961e363aff6fd4be87f98134f6318857e WatchSource:0}: Error finding container 4a8fa6214b633f8e7edd42fe7255909961e363aff6fd4be87f98134f6318857e: Status 404 returned error can't find the container with id 4a8fa6214b633f8e7edd42fe7255909961e363aff6fd4be87f98134f6318857e Nov 27 17:12:07 crc kubenswrapper[4792]: W1127 17:12:07.512374 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-61085aa181df7d8f49031446b367def136fd55f05f8da17d5de02cf5da943fd6 WatchSource:0}: Error finding container 61085aa181df7d8f49031446b367def136fd55f05f8da17d5de02cf5da943fd6: Status 404 returned error can't find the container with id 61085aa181df7d8f49031446b367def136fd55f05f8da17d5de02cf5da943fd6 Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.631565 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.738493 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07e202a3-1f47-409c-83f2-a066ddb1ffe2-config-volume\") pod \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.738883 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07e202a3-1f47-409c-83f2-a066ddb1ffe2-secret-volume\") pod \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.738918 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm5bp\" (UniqueName: \"kubernetes.io/projected/07e202a3-1f47-409c-83f2-a066ddb1ffe2-kube-api-access-tm5bp\") pod \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\" (UID: \"07e202a3-1f47-409c-83f2-a066ddb1ffe2\") " Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.739757 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e202a3-1f47-409c-83f2-a066ddb1ffe2-config-volume" (OuterVolumeSpecName: "config-volume") pod "07e202a3-1f47-409c-83f2-a066ddb1ffe2" (UID: "07e202a3-1f47-409c-83f2-a066ddb1ffe2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.749848 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e202a3-1f47-409c-83f2-a066ddb1ffe2-kube-api-access-tm5bp" (OuterVolumeSpecName: "kube-api-access-tm5bp") pod "07e202a3-1f47-409c-83f2-a066ddb1ffe2" (UID: "07e202a3-1f47-409c-83f2-a066ddb1ffe2"). InnerVolumeSpecName "kube-api-access-tm5bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.749988 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e202a3-1f47-409c-83f2-a066ddb1ffe2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "07e202a3-1f47-409c-83f2-a066ddb1ffe2" (UID: "07e202a3-1f47-409c-83f2-a066ddb1ffe2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.797688 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ft9s4"] Nov 27 17:12:07 crc kubenswrapper[4792]: W1127 17:12:07.805264 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbecf7050_f3f8_42a3_bf02_cf9347e493e6.slice/crio-58b9d8da0197d7503bf038d6572e25c2bc9afabb893cfd6d46ad6bd2480c1b82 WatchSource:0}: Error finding container 58b9d8da0197d7503bf038d6572e25c2bc9afabb893cfd6d46ad6bd2480c1b82: Status 404 returned error can't find the container with id 58b9d8da0197d7503bf038d6572e25c2bc9afabb893cfd6d46ad6bd2480c1b82 Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.840211 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/07e202a3-1f47-409c-83f2-a066ddb1ffe2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.840241 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm5bp\" (UniqueName: \"kubernetes.io/projected/07e202a3-1f47-409c-83f2-a066ddb1ffe2-kube-api-access-tm5bp\") on node \"crc\" DevicePath \"\"" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.840251 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07e202a3-1f47-409c-83f2-a066ddb1ffe2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.914343 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8kfjw"] Nov 27 17:12:07 crc kubenswrapper[4792]: E1127 17:12:07.916692 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e202a3-1f47-409c-83f2-a066ddb1ffe2" containerName="collect-profiles" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.916716 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e202a3-1f47-409c-83f2-a066ddb1ffe2" containerName="collect-profiles" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.916908 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e202a3-1f47-409c-83f2-a066ddb1ffe2" containerName="collect-profiles" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.917635 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.919835 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.923283 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kfjw"] Nov 27 17:12:07 crc kubenswrapper[4792]: I1127 17:12:07.934404 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5qmhg"] Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.028867 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 27 17:12:08 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Nov 27 17:12:08 crc kubenswrapper[4792]: [+]process-running ok Nov 27 17:12:08 crc kubenswrapper[4792]: healthz check failed Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.028967 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.042037 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-utilities\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.042098 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-catalog-content\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.042145 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5bb\" (UniqueName: \"kubernetes.io/projected/28aedb43-5391-4868-b839-54e2857d62c7-kube-api-access-lp5bb\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.143168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-utilities\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.143219 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-catalog-content\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.143260 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp5bb\" (UniqueName: \"kubernetes.io/projected/28aedb43-5391-4868-b839-54e2857d62c7-kube-api-access-lp5bb\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.144087 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-utilities\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.144431 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-catalog-content\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.161151 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp5bb\" (UniqueName: \"kubernetes.io/projected/28aedb43-5391-4868-b839-54e2857d62c7-kube-api-access-lp5bb\") pod \"redhat-marketplace-8kfjw\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.240917 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.275121 4792 generic.go:334] "Generic (PLEG): container finished" podID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerID="702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1" exitCode=0 Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.275261 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p66q" event={"ID":"6879b0d9-3b16-4233-a329-ed6fd9c58bd8","Type":"ContainerDied","Data":"702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.275340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p66q" event={"ID":"6879b0d9-3b16-4233-a329-ed6fd9c58bd8","Type":"ContainerStarted","Data":"4a8fa6214b633f8e7edd42fe7255909961e363aff6fd4be87f98134f6318857e"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.290348 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.290540 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.298795 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" 
event={"ID":"07e202a3-1f47-409c-83f2-a066ddb1ffe2","Type":"ContainerDied","Data":"1e323dc74ba22df4788dbdfc52627d6e291317dee499d83bddbcd9aa5f895342"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.298843 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e323dc74ba22df4788dbdfc52627d6e291317dee499d83bddbcd9aa5f895342" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.298943 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.318189 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-26frg"] Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.319292 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.322004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"94935fb7163009df3f395e2700a6cf910a632f9cfcbc5028c74b791cadb1c9d7"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.322061 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"45ca54f4ec25de1a000ad3df5504211c6cd4ff3e4cb1e64d2905154fbc0a9832"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.322299 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.328466 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerID="f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682" exitCode=0 Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.328584 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d57cd" event={"ID":"a9d4ac25-3b3d-44a2-b384-0cc385078f80","Type":"ContainerDied","Data":"f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.328616 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d57cd" event={"ID":"a9d4ac25-3b3d-44a2-b384-0cc385078f80","Type":"ContainerStarted","Data":"eaae724b1a2954d2654a58961748ef48a7a67d8430badc487acd99918a6a5adb"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.336731 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"74d4195bf0c20952ae7bca07567946cf17c60a4c1c265477dc51f17b997a6eab"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.336802 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"61085aa181df7d8f49031446b367def136fd55f05f8da17d5de02cf5da943fd6"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.343236 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-26frg"] Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.348951 4792 generic.go:334] "Generic (PLEG): container finished" podID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerID="d3db912fd52e9f6c9fae2e1025e3edeee402374c4fd82196d9047233f8dcff37" exitCode=0 Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.349004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqrlt" event={"ID":"bfe5d9aa-13db-4750-b440-dc1e83e149f0","Type":"ContainerDied","Data":"d3db912fd52e9f6c9fae2e1025e3edeee402374c4fd82196d9047233f8dcff37"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.356395 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" event={"ID":"2ec75c0b-1943-49d4-8813-bf8cc5218511","Type":"ContainerStarted","Data":"5b8085bb1e9e42720f2ba658e2e4ef5f7678875a5b291d711bb7ba696e6eba6b"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.356435 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" event={"ID":"2ec75c0b-1943-49d4-8813-bf8cc5218511","Type":"ContainerStarted","Data":"d8aa43622f6f1540c442db61ba0a3328b661f16da8ee4eec0329ad43fc998a4a"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.360465 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d92b9a8133da490039b190fe5cc09012719e5e6b5edf520860196a4139e296a4"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.360853 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"af7eec0ab125ad33dd954307625e7d25c28437d29242fa0fe6929f7b3636d95e"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.374192 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" event={"ID":"becf7050-f3f8-42a3-bf02-cf9347e493e6","Type":"ContainerStarted","Data":"cc35ee17afe79a4926329533bf444b4f93481c5717f95194f53cc799bd6bba52"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.374225 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" event={"ID":"becf7050-f3f8-42a3-bf02-cf9347e493e6","Type":"ContainerStarted","Data":"58b9d8da0197d7503bf038d6572e25c2bc9afabb893cfd6d46ad6bd2480c1b82"} Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.374237 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.446814 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgr7\" (UniqueName: \"kubernetes.io/projected/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-kube-api-access-pdgr7\") pod \"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.446993 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-catalog-content\") pod 
\"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.447053 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-utilities\") pod \"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.549617 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-utilities\") pod \"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.549757 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgr7\" (UniqueName: \"kubernetes.io/projected/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-kube-api-access-pdgr7\") pod \"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.549821 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-catalog-content\") pod \"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.550282 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-catalog-content\") pod \"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.550472 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-utilities\") pod \"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.578960 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgr7\" (UniqueName: \"kubernetes.io/projected/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-kube-api-access-pdgr7\") pod \"redhat-marketplace-26frg\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.627332 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" podStartSLOduration=128.627309933 podStartE2EDuration="2m8.627309933s" podCreationTimestamp="2025-11-27 17:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:08.521559967 +0000 UTC m=+150.864386305" watchObservedRunningTime="2025-11-27 17:12:08.627309933 +0000 UTC m=+150.970136261" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.628984 4792 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kfjw"] Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.673413 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.705959 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 27 17:12:08 crc kubenswrapper[4792]: I1127 17:12:08.927542 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26frg"] Nov 27 17:12:08 crc kubenswrapper[4792]: W1127 17:12:08.943759 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a2b6ebf_ba92_4d93_b98c_8df09cea427e.slice/crio-6a2fcdfe99d191adcb61730e4cfda55f6bf5d3e009838a1a7273a714c723ff03 WatchSource:0}: Error finding container 6a2fcdfe99d191adcb61730e4cfda55f6bf5d3e009838a1a7273a714c723ff03: Status 404 returned error can't find the container with id 6a2fcdfe99d191adcb61730e4cfda55f6bf5d3e009838a1a7273a714c723ff03 Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.033485 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 27 17:12:09 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Nov 27 17:12:09 crc kubenswrapper[4792]: [+]process-running ok Nov 27 17:12:09 crc kubenswrapper[4792]: healthz check failed Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.033537 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.090071 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.091050 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.092579 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.092816 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.093065 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.157250 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.157291 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.258655 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.258722 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.258868 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.279502 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.311667 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kt9ct"] Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.312986 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.314552 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.318857 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kt9ct"] Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.380531 4792 generic.go:334] "Generic (PLEG): container finished" podID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerID="d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008" exitCode=0 Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.380622 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26frg" event={"ID":"8a2b6ebf-ba92-4d93-b98c-8df09cea427e","Type":"ContainerDied","Data":"d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008"} Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.380679 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26frg" event={"ID":"8a2b6ebf-ba92-4d93-b98c-8df09cea427e","Type":"ContainerStarted","Data":"6a2fcdfe99d191adcb61730e4cfda55f6bf5d3e009838a1a7273a714c723ff03"} Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.386589 4792 generic.go:334] "Generic (PLEG): container finished" podID="28aedb43-5391-4868-b839-54e2857d62c7" containerID="87e9295e4e3bcc62b762b3a76d0c30970089709dae92380fae80c102d079a7d3" exitCode=0 Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.386679 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kfjw" event={"ID":"28aedb43-5391-4868-b839-54e2857d62c7","Type":"ContainerDied","Data":"87e9295e4e3bcc62b762b3a76d0c30970089709dae92380fae80c102d079a7d3"} Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.386704 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kfjw" event={"ID":"28aedb43-5391-4868-b839-54e2857d62c7","Type":"ContainerStarted","Data":"d40171afb4968c78361d637012b466dc67e81a587304f68eaa47dedd08eabd69"} Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.391682 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qmhg" event={"ID":"2ec75c0b-1943-49d4-8813-bf8cc5218511","Type":"ContainerStarted","Data":"8a17530a619ceff771ebf5e480826f39b0db987415110b4e8097658eb1b24cbf"} Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.419507 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.448674 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5qmhg" podStartSLOduration=130.448659854 podStartE2EDuration="2m10.448659854s" podCreationTimestamp="2025-11-27 17:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:09.445108619 +0000 UTC m=+151.787934937" watchObservedRunningTime="2025-11-27 17:12:09.448659854 +0000 UTC m=+151.791486172" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.462199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpmcr\" (UniqueName: \"kubernetes.io/projected/a53d4e7e-b60e-4c7b-91ce-f025197188d8-kube-api-access-vpmcr\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.462350 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-utilities\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.463201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-catalog-content\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.566286 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-catalog-content\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.566351 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpmcr\" (UniqueName: \"kubernetes.io/projected/a53d4e7e-b60e-4c7b-91ce-f025197188d8-kube-api-access-vpmcr\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.566426 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-utilities\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.566996 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-utilities\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.567039 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-catalog-content\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.596789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpmcr\" (UniqueName: \"kubernetes.io/projected/a53d4e7e-b60e-4c7b-91ce-f025197188d8-kube-api-access-vpmcr\") pod \"redhat-operators-kt9ct\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.647205 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.654004 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.654113 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.664835 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-r96bc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.664910 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-r96bc" podUID="a0015215-3f91-43fc-bbee-2560bb8f4c62" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.665374 4792 patch_prober.go:28] interesting pod/downloads-7954f5f757-r96bc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.665393 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-r96bc" podUID="a0015215-3f91-43fc-bbee-2560bb8f4c62" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.670848 4792 patch_prober.go:28] interesting pod/console-f9d7485db-k86pd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.671174 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-k86pd" podUID="93d84de9-e75f-4127-b3ee-890375498dc3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.688742 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 
17:12:09.714034 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-62nl2"] Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.715098 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: W1127 17:12:09.725570 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd0b38dea_efc2_47a8_b840_2b00d2d04e6c.slice/crio-21b2d2d881a4274f52c5a712ad9d7a3e58d7738fb76db674a739aedff30ac99b WatchSource:0}: Error finding container 21b2d2d881a4274f52c5a712ad9d7a3e58d7738fb76db674a739aedff30ac99b: Status 404 returned error can't find the container with id 21b2d2d881a4274f52c5a712ad9d7a3e58d7738fb76db674a739aedff30ac99b Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.725915 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62nl2"] Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.759023 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.764173 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-smxkj" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.770960 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vhn\" (UniqueName: \"kubernetes.io/projected/54d66969-c16d-43c7-adb3-64d00d1c451d-kube-api-access-d2vhn\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.771043 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-catalog-content\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.771104 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-utilities\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.872174 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-catalog-content\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.872252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-utilities\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.872390 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2vhn\" (UniqueName: 
\"kubernetes.io/projected/54d66969-c16d-43c7-adb3-64d00d1c451d-kube-api-access-d2vhn\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.872815 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-catalog-content\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.873430 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-utilities\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:09 crc kubenswrapper[4792]: I1127 17:12:09.905125 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2vhn\" (UniqueName: \"kubernetes.io/projected/54d66969-c16d-43c7-adb3-64d00d1c451d-kube-api-access-d2vhn\") pod \"redhat-operators-62nl2\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.029223 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 27 17:12:10 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld Nov 27 17:12:10 crc kubenswrapper[4792]: [+]process-running ok Nov 27 17:12:10 crc kubenswrapper[4792]: healthz check failed Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.029290 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.030133 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qgvjw" Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.038137 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.084393 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx"
Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.194119 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kt9ct"]
Nov 27 17:12:10 crc kubenswrapper[4792]: W1127 17:12:10.212042 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda53d4e7e_b60e_4c7b_91ce_f025197188d8.slice/crio-6ed2d65a2c473aa09a37b393ec647158742258912e920cfece4fb39f9a88786e WatchSource:0}: Error finding container 6ed2d65a2c473aa09a37b393ec647158742258912e920cfece4fb39f9a88786e: Status 404 returned error can't find the container with id 6ed2d65a2c473aa09a37b393ec647158742258912e920cfece4fb39f9a88786e
Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.400768 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d0b38dea-efc2-47a8-b840-2b00d2d04e6c","Type":"ContainerStarted","Data":"21b2d2d881a4274f52c5a712ad9d7a3e58d7738fb76db674a739aedff30ac99b"}
Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.407107 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kt9ct" event={"ID":"a53d4e7e-b60e-4c7b-91ce-f025197188d8","Type":"ContainerStarted","Data":"6ed2d65a2c473aa09a37b393ec647158742258912e920cfece4fb39f9a88786e"}
Nov 27 17:12:10 crc kubenswrapper[4792]: I1127 17:12:10.428193 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62nl2"]
Nov 27 17:12:10 crc kubenswrapper[4792]: W1127 17:12:10.439276 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54d66969_c16d_43c7_adb3_64d00d1c451d.slice/crio-de26b9ef1e8cd22c5730ab9b07f390a6e6f222aab66c64b7a98c1897f3af9c60 WatchSource:0}: Error finding container de26b9ef1e8cd22c5730ab9b07f390a6e6f222aab66c64b7a98c1897f3af9c60: Status 404 returned error can't find the container with id de26b9ef1e8cd22c5730ab9b07f390a6e6f222aab66c64b7a98c1897f3af9c60
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.030356 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 27 17:12:11 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Nov 27 17:12:11 crc kubenswrapper[4792]: [+]process-running ok
Nov 27 17:12:11 crc kubenswrapper[4792]: healthz check failed
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.030416 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.431197 4792 generic.go:334] "Generic (PLEG): container finished" podID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerID="8c4d84c8dd65530d7bdd30b62be007bdab81e697572b1b1eb6446bac59235a96" exitCode=0
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.431306 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kt9ct" event={"ID":"a53d4e7e-b60e-4c7b-91ce-f025197188d8","Type":"ContainerDied","Data":"8c4d84c8dd65530d7bdd30b62be007bdab81e697572b1b1eb6446bac59235a96"}
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.435672 4792 generic.go:334] "Generic (PLEG): container finished" podID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerID="f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5" exitCode=0
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.435748 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62nl2" event={"ID":"54d66969-c16d-43c7-adb3-64d00d1c451d","Type":"ContainerDied","Data":"f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5"}
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.435779 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62nl2" event={"ID":"54d66969-c16d-43c7-adb3-64d00d1c451d","Type":"ContainerStarted","Data":"de26b9ef1e8cd22c5730ab9b07f390a6e6f222aab66c64b7a98c1897f3af9c60"}
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.463571 4792 generic.go:334] "Generic (PLEG): container finished" podID="d0b38dea-efc2-47a8-b840-2b00d2d04e6c" containerID="bd8a75e21481863d41fbb0a06c632c54610bfc4d35bbee03271dbcd829be5f54" exitCode=0
Nov 27 17:12:11 crc kubenswrapper[4792]: I1127 17:12:11.463609 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d0b38dea-efc2-47a8-b840-2b00d2d04e6c","Type":"ContainerDied","Data":"bd8a75e21481863d41fbb0a06c632c54610bfc4d35bbee03271dbcd829be5f54"}
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.027328 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 27 17:12:12 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Nov 27 17:12:12 crc kubenswrapper[4792]: [+]process-running ok
Nov 27 17:12:12 crc kubenswrapper[4792]: healthz check failed
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.027380 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.705860 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.771543 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 27 17:12:12 crc kubenswrapper[4792]: E1127 17:12:12.772382 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b38dea-efc2-47a8-b840-2b00d2d04e6c" containerName="pruner"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.772498 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b38dea-efc2-47a8-b840-2b00d2d04e6c" containerName="pruner"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.772700 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b38dea-efc2-47a8-b840-2b00d2d04e6c" containerName="pruner"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.773352 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.778011 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.784070 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.784482 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.821036 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kubelet-dir\") pod \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\" (UID: \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\") "
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.821081 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kube-api-access\") pod \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\" (UID: \"d0b38dea-efc2-47a8-b840-2b00d2d04e6c\") "
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.821340 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf15e279-9121-4a57-a99c-5fb813743eac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bf15e279-9121-4a57-a99c-5fb813743eac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.821366 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf15e279-9121-4a57-a99c-5fb813743eac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bf15e279-9121-4a57-a99c-5fb813743eac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.821482 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d0b38dea-efc2-47a8-b840-2b00d2d04e6c" (UID: "d0b38dea-efc2-47a8-b840-2b00d2d04e6c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.828831 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d0b38dea-efc2-47a8-b840-2b00d2d04e6c" (UID: "d0b38dea-efc2-47a8-b840-2b00d2d04e6c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.923283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf15e279-9121-4a57-a99c-5fb813743eac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bf15e279-9121-4a57-a99c-5fb813743eac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.923423 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf15e279-9121-4a57-a99c-5fb813743eac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bf15e279-9121-4a57-a99c-5fb813743eac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.923538 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.923582 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d0b38dea-efc2-47a8-b840-2b00d2d04e6c-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.924135 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf15e279-9121-4a57-a99c-5fb813743eac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"bf15e279-9121-4a57-a99c-5fb813743eac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:12 crc kubenswrapper[4792]: I1127 17:12:12.952510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf15e279-9121-4a57-a99c-5fb813743eac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"bf15e279-9121-4a57-a99c-5fb813743eac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:13 crc kubenswrapper[4792]: I1127 17:12:13.029359 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 27 17:12:13 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Nov 27 17:12:13 crc kubenswrapper[4792]: [+]process-running ok
Nov 27 17:12:13 crc kubenswrapper[4792]: healthz check failed
Nov 27 17:12:13 crc kubenswrapper[4792]: I1127 17:12:13.029720 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 27 17:12:13 crc kubenswrapper[4792]: I1127 17:12:13.104319 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:13 crc kubenswrapper[4792]: I1127 17:12:13.401915 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 27 17:12:13 crc kubenswrapper[4792]: I1127 17:12:13.501702 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bf15e279-9121-4a57-a99c-5fb813743eac","Type":"ContainerStarted","Data":"f0fd146a6cedc220cf02b00bc511f7b796f82add862142c17c5730397b3a6588"}
Nov 27 17:12:13 crc kubenswrapper[4792]: I1127 17:12:13.507380 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 27 17:12:13 crc kubenswrapper[4792]: I1127 17:12:13.506984 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d0b38dea-efc2-47a8-b840-2b00d2d04e6c","Type":"ContainerDied","Data":"21b2d2d881a4274f52c5a712ad9d7a3e58d7738fb76db674a739aedff30ac99b"}
Nov 27 17:12:13 crc kubenswrapper[4792]: I1127 17:12:13.507563 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21b2d2d881a4274f52c5a712ad9d7a3e58d7738fb76db674a739aedff30ac99b"
Nov 27 17:12:14 crc kubenswrapper[4792]: I1127 17:12:14.045849 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 27 17:12:14 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Nov 27 17:12:14 crc kubenswrapper[4792]: [+]process-running ok
Nov 27 17:12:14 crc kubenswrapper[4792]: healthz check failed
Nov 27 17:12:14 crc kubenswrapper[4792]: I1127 17:12:14.046102 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 27 17:12:14 crc kubenswrapper[4792]: I1127 17:12:14.519749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bf15e279-9121-4a57-a99c-5fb813743eac","Type":"ContainerStarted","Data":"2248579af60e4676f839c064e7b9fa9d57fd1761c4549e806d5da278566ff0d0"}
Nov 27 17:12:14 crc kubenswrapper[4792]: I1127 17:12:14.536129 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.5361114110000003 podStartE2EDuration="2.536111411s" podCreationTimestamp="2025-11-27 17:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:14.535656878 +0000 UTC m=+156.878483186" watchObservedRunningTime="2025-11-27 17:12:14.536111411 +0000 UTC m=+156.878937729"
Nov 27 17:12:15 crc kubenswrapper[4792]: I1127 17:12:15.029605 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 27 17:12:15 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Nov 27 17:12:15 crc kubenswrapper[4792]: [+]process-running ok
Nov 27 17:12:15 crc kubenswrapper[4792]: healthz check failed
Nov 27 17:12:15 crc kubenswrapper[4792]: I1127 17:12:15.029677 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 27 17:12:15 crc kubenswrapper[4792]: I1127 17:12:15.140291 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4855s"
Nov 27 17:12:15 crc kubenswrapper[4792]: I1127 17:12:15.530284 4792 generic.go:334] "Generic (PLEG): container finished" podID="bf15e279-9121-4a57-a99c-5fb813743eac" containerID="2248579af60e4676f839c064e7b9fa9d57fd1761c4549e806d5da278566ff0d0" exitCode=0
Nov 27 17:12:15 crc kubenswrapper[4792]: I1127 17:12:15.530329 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bf15e279-9121-4a57-a99c-5fb813743eac","Type":"ContainerDied","Data":"2248579af60e4676f839c064e7b9fa9d57fd1761c4549e806d5da278566ff0d0"}
Nov 27 17:12:16 crc kubenswrapper[4792]: I1127 17:12:16.028342 4792 patch_prober.go:28] interesting pod/router-default-5444994796-qgvjw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 27 17:12:16 crc kubenswrapper[4792]: [-]has-synced failed: reason withheld
Nov 27 17:12:16 crc kubenswrapper[4792]: [+]process-running ok
Nov 27 17:12:16 crc kubenswrapper[4792]: healthz check failed
Nov 27 17:12:16 crc kubenswrapper[4792]: I1127 17:12:16.028395 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qgvjw" podUID="101bdb1c-75a1-4d92-90e9-360cece56c1e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 27 17:12:17 crc kubenswrapper[4792]: I1127 17:12:17.030499 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qgvjw"
Nov 27 17:12:17 crc kubenswrapper[4792]: I1127 17:12:17.032941 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qgvjw"
Nov 27 17:12:19 crc kubenswrapper[4792]: I1127 17:12:19.665850 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-r96bc"
Nov 27 17:12:19 crc kubenswrapper[4792]: I1127 17:12:19.699455 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:12:19 crc kubenswrapper[4792]: I1127 17:12:19.705733 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-k86pd"
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.305852 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
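
Note: the router's startup probe fails with HTTP 500 once a second from 17:12:10 through 17:12:16; the [-]backend-http and [-]has-synced lines are the healthz sub-checks still failing, and at 17:12:17 the startup probe flips to started and readiness to ready. A sketch that tallies "Probe failed" records per pod and probe type, under the same one-record-per-line kubelet.log assumption:

    import re
    from collections import Counter

    # Each failure is logged by prober.go with the probe type and pod name.
    FAILED = re.compile(r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"')

    counts = Counter()
    with open("kubelet.log") as f:  # hypothetical dump, one record per line
        for line in f:
            m = FAILED.search(line)
            if m:
                counts[(m.group("pod"), m.group("type"))] += 1

    for (pod, ptype), n in counts.most_common():
        print(f"{n:4d}  {ptype:8s}  {pod}")

Run across this section it should report seven Startup failures for the router and one Liveness failure for machine-config-daemon-56bcx.
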
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.496006 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf15e279-9121-4a57-a99c-5fb813743eac-kube-api-access\") pod \"bf15e279-9121-4a57-a99c-5fb813743eac\" (UID: \"bf15e279-9121-4a57-a99c-5fb813743eac\") "
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.496955 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf15e279-9121-4a57-a99c-5fb813743eac-kubelet-dir\") pod \"bf15e279-9121-4a57-a99c-5fb813743eac\" (UID: \"bf15e279-9121-4a57-a99c-5fb813743eac\") "
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.497233 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf15e279-9121-4a57-a99c-5fb813743eac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf15e279-9121-4a57-a99c-5fb813743eac" (UID: "bf15e279-9121-4a57-a99c-5fb813743eac"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.498058 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf15e279-9121-4a57-a99c-5fb813743eac-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.502635 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf15e279-9121-4a57-a99c-5fb813743eac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf15e279-9121-4a57-a99c-5fb813743eac" (UID: "bf15e279-9121-4a57-a99c-5fb813743eac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.565063 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"bf15e279-9121-4a57-a99c-5fb813743eac","Type":"ContainerDied","Data":"f0fd146a6cedc220cf02b00bc511f7b796f82add862142c17c5730397b3a6588"}
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.565102 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0fd146a6cedc220cf02b00bc511f7b796f82add862142c17c5730397b3a6588"
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.565152 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 27 17:12:21 crc kubenswrapper[4792]: I1127 17:12:21.599080 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf15e279-9121-4a57-a99c-5fb813743eac-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 27 17:12:27 crc kubenswrapper[4792]: I1127 17:12:27.482970 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4"
Nov 27 17:12:36 crc kubenswrapper[4792]: E1127 17:12:36.520457 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 27 17:12:36 crc kubenswrapper[4792]: E1127 17:12:36.521132 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp5bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8kfjw_openshift-marketplace(28aedb43-5391-4868-b839-54e2857d62c7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 27 17:12:36 crc kubenswrapper[4792]: E1127 17:12:36.522347 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8kfjw" podUID="28aedb43-5391-4868-b839-54e2857d62c7"
Nov 27 17:12:38 crc kubenswrapper[4792]: I1127 17:12:38.290408 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
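
Note: each pull failure above arrives as a triple: a log.go "PullImage from image service failed" record, a kuberuntime_manager "Unhandled Error" dump of the whole extract-content init-container spec, and a pod_workers "Error syncing pod, skipping" with ErrImagePull. The "code = Canceled ... context canceled" detail means the copy from the registry was cancelled mid-transfer rather than rejected. A sketch that counts the failed pulls per image reference (same hypothetical one-record-per-line kubelet.log):

    import re
    from collections import Counter

    # One count per "PullImage from image service failed" record, keyed by
    # the image= field that follows the error on the same record.
    PULL = re.compile(r'"PullImage from image service failed".*?image="(?P<image>[^"]+)"')

    counts = Counter()
    with open("kubelet.log") as f:  # hypothetical dump, one record per line
        for line in f:
            m = PULL.search(line)
            if m:
                counts[m.group("image")] += 1

    for image, n in counts.most_common():
        print(n, image)
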
Nov 27 17:12:38 crc kubenswrapper[4792]: I1127 17:12:38.290478 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 17:12:38 crc kubenswrapper[4792]: E1127 17:12:38.351230 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8kfjw" podUID="28aedb43-5391-4868-b839-54e2857d62c7"
Nov 27 17:12:40 crc kubenswrapper[4792]: I1127 17:12:40.127173 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hxdbd"
Nov 27 17:12:43 crc kubenswrapper[4792]: E1127 17:12:43.450788 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 27 17:12:43 crc kubenswrapper[4792]: E1127 17:12:43.451229 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qkkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6p66q_openshift-marketplace(6879b0d9-3b16-4233-a329-ed6fd9c58bd8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 27 17:12:43 crc kubenswrapper[4792]: E1127 17:12:43.452541 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6p66q" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8"
Nov 27 17:12:45 crc kubenswrapper[4792]: E1127 17:12:45.339771 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 27 17:12:45 crc kubenswrapper[4792]: E1127 17:12:45.340142 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk6q9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-d57cd_openshift-marketplace(a9d4ac25-3b3d-44a2-b384-0cc385078f80): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 27 17:12:45 crc kubenswrapper[4792]: E1127 17:12:45.341944 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d57cd" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80"
Nov 27 17:12:45 crc kubenswrapper[4792]: E1127 17:12:45.362227 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 27 17:12:45 crc kubenswrapper[4792]: E1127 17:12:45.362428 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9v45k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-c8djv_openshift-marketplace(0aacc646-59f3-41fe-b59b-ce5fed81861f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 27 17:12:45 crc kubenswrapper[4792]: E1127 17:12:45.364148 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-c8djv" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f"
Nov 27 17:12:46 crc kubenswrapper[4792]: I1127 17:12:46.920968 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.093760 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-c8djv" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.093786 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6p66q" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.093911 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d57cd" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.125831 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.126038 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2vhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-62nl2_openshift-marketplace(54d66969-c16d-43c7-adb3-64d00d1c451d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.127272 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-62nl2" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.167690 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.167892 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpmcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kt9ct_openshift-marketplace(a53d4e7e-b60e-4c7b-91ce-f025197188d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.170089 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kt9ct" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.214576 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.214765 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lqz5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tqrlt_openshift-marketplace(bfe5d9aa-13db-4750-b440-dc1e83e149f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.215931 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tqrlt" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0"
Nov 27 17:12:48 crc kubenswrapper[4792]: I1127 17:12:48.722405 4792 generic.go:334] "Generic (PLEG): container finished" podID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerID="876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0" exitCode=0
Nov 27 17:12:48 crc kubenswrapper[4792]: I1127 17:12:48.722548 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26frg" event={"ID":"8a2b6ebf-ba92-4d93-b98c-8df09cea427e","Type":"ContainerDied","Data":"876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0"}
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.727731 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kt9ct" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.727879 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tqrlt" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0"
Nov 27 17:12:48 crc kubenswrapper[4792]: E1127 17:12:48.731958 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-62nl2" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d"
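
Note: once a pull has failed, retries are throttled; subsequent sync attempts surface as ImagePullBackOff ("Back-off pulling image ...") rather than fresh pulls, as with the seven catalog pods above. A sketch that maps each backed-off pod to its image; registry.redhat.io is hardwired because it is the only registry in this dump, and kubelet.log is again a hypothetical one-record-per-line file:

    import re

    # The image reference inside a backoff record is double-escaped, so the
    # character class stops at the first backslash after the tag.
    BACKOFF = re.compile(
        r'ImagePullBackOff.*?(?P<image>registry\.redhat\.io/[^\\"]+).*?pod="(?P<pod>[^"]+)"'
    )

    stuck = {}
    with open("kubelet.log") as f:  # hypothetical dump, one record per line
        for line in f:
            m = BACKOFF.search(line)
            if m:
                stuck[m.group("pod")] = m.group("image")

    for pod, image in sorted(stuck.items()):
        print(pod, "->", image)
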
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-62nl2" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" Nov 27 17:12:49 crc kubenswrapper[4792]: I1127 17:12:49.729391 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26frg" event={"ID":"8a2b6ebf-ba92-4d93-b98c-8df09cea427e","Type":"ContainerStarted","Data":"b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d"} Nov 27 17:12:50 crc kubenswrapper[4792]: I1127 17:12:50.714514 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-26frg" podStartSLOduration=2.898551936 podStartE2EDuration="42.714496211s" podCreationTimestamp="2025-11-27 17:12:08 +0000 UTC" firstStartedPulling="2025-11-27 17:12:09.385627124 +0000 UTC m=+151.728453442" lastFinishedPulling="2025-11-27 17:12:49.201571359 +0000 UTC m=+191.544397717" observedRunningTime="2025-11-27 17:12:49.753712675 +0000 UTC m=+192.096539013" watchObservedRunningTime="2025-11-27 17:12:50.714496211 +0000 UTC m=+193.057322539" Nov 27 17:12:50 crc kubenswrapper[4792]: I1127 17:12:50.923126 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 27 17:12:50 crc kubenswrapper[4792]: E1127 17:12:50.924468 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf15e279-9121-4a57-a99c-5fb813743eac" containerName="pruner" Nov 27 17:12:50 crc kubenswrapper[4792]: I1127 17:12:50.924487 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf15e279-9121-4a57-a99c-5fb813743eac" containerName="pruner" Nov 27 17:12:50 crc kubenswrapper[4792]: I1127 17:12:50.924629 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf15e279-9121-4a57-a99c-5fb813743eac" containerName="pruner" Nov 27 17:12:50 crc kubenswrapper[4792]: I1127 17:12:50.928702 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:50 crc kubenswrapper[4792]: I1127 17:12:50.929562 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 27 17:12:50 crc kubenswrapper[4792]: I1127 17:12:50.940526 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 27 17:12:50 crc kubenswrapper[4792]: I1127 17:12:50.940856 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.022307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.022385 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.123239 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.123347 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.123426 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.153603 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.253370 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.691813 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 27 17:12:51 crc kubenswrapper[4792]: W1127 17:12:51.702753 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5f83a303_0864_4e6c_a5ca_8d53e2a64edf.slice/crio-fbe75dac53034ef1b6c24644544edcd6b3db9c7be34d324cf485a0d197f9abb1 WatchSource:0}: Error finding container fbe75dac53034ef1b6c24644544edcd6b3db9c7be34d324cf485a0d197f9abb1: Status 404 returned error can't find the container with id fbe75dac53034ef1b6c24644544edcd6b3db9c7be34d324cf485a0d197f9abb1 Nov 27 17:12:51 crc kubenswrapper[4792]: I1127 17:12:51.741180 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5f83a303-0864-4e6c-a5ca-8d53e2a64edf","Type":"ContainerStarted","Data":"fbe75dac53034ef1b6c24644544edcd6b3db9c7be34d324cf485a0d197f9abb1"} Nov 27 17:12:52 crc kubenswrapper[4792]: I1127 17:12:52.759496 4792 generic.go:334] "Generic (PLEG): container finished" podID="28aedb43-5391-4868-b839-54e2857d62c7" containerID="465484a0e5d91cb8402f4bbb87831740f0c289478266cfc27123c2dbd63bcc13" exitCode=0 Nov 27 17:12:52 crc kubenswrapper[4792]: I1127 17:12:52.759572 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kfjw" event={"ID":"28aedb43-5391-4868-b839-54e2857d62c7","Type":"ContainerDied","Data":"465484a0e5d91cb8402f4bbb87831740f0c289478266cfc27123c2dbd63bcc13"} Nov 27 17:12:52 crc kubenswrapper[4792]: I1127 17:12:52.762955 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5f83a303-0864-4e6c-a5ca-8d53e2a64edf","Type":"ContainerStarted","Data":"bac27726bb62285ec3d7adbbf0e6789194d018604581c9fb7e5b36623fd2209a"} Nov 27 17:12:52 crc kubenswrapper[4792]: I1127 17:12:52.810710 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.8106948210000002 podStartE2EDuration="2.810694821s" podCreationTimestamp="2025-11-27 17:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:52.806523378 +0000 UTC m=+195.149349696" watchObservedRunningTime="2025-11-27 17:12:52.810694821 +0000 UTC m=+195.153521139" Nov 27 17:12:53 crc kubenswrapper[4792]: I1127 17:12:53.769279 4792 generic.go:334] "Generic (PLEG): container finished" podID="5f83a303-0864-4e6c-a5ca-8d53e2a64edf" containerID="bac27726bb62285ec3d7adbbf0e6789194d018604581c9fb7e5b36623fd2209a" exitCode=0 Nov 27 17:12:53 crc kubenswrapper[4792]: I1127 17:12:53.769375 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5f83a303-0864-4e6c-a5ca-8d53e2a64edf","Type":"ContainerDied","Data":"bac27726bb62285ec3d7adbbf0e6789194d018604581c9fb7e5b36623fd2209a"} Nov 27 17:12:53 crc kubenswrapper[4792]: I1127 17:12:53.773241 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kfjw" event={"ID":"28aedb43-5391-4868-b839-54e2857d62c7","Type":"ContainerStarted","Data":"34699664bdfbb62220f3ff185873bd433e4793a7bf609a3c8cbdf0bad7b9e931"} Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.012153 4792 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.031780 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8kfjw" podStartSLOduration=3.918702815 podStartE2EDuration="48.031759037s" podCreationTimestamp="2025-11-27 17:12:07 +0000 UTC" firstStartedPulling="2025-11-27 17:12:09.388515 +0000 UTC m=+151.731341318" lastFinishedPulling="2025-11-27 17:12:53.501571192 +0000 UTC m=+195.844397540" observedRunningTime="2025-11-27 17:12:53.803530188 +0000 UTC m=+196.146356496" watchObservedRunningTime="2025-11-27 17:12:55.031759037 +0000 UTC m=+197.374585365" Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.178300 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kube-api-access\") pod \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\" (UID: \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\") " Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.178407 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kubelet-dir\") pod \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\" (UID: \"5f83a303-0864-4e6c-a5ca-8d53e2a64edf\") " Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.178588 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5f83a303-0864-4e6c-a5ca-8d53e2a64edf" (UID: "5f83a303-0864-4e6c-a5ca-8d53e2a64edf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.193624 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5f83a303-0864-4e6c-a5ca-8d53e2a64edf" (UID: "5f83a303-0864-4e6c-a5ca-8d53e2a64edf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.280116 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.280161 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5f83a303-0864-4e6c-a5ca-8d53e2a64edf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.781780 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5f83a303-0864-4e6c-a5ca-8d53e2a64edf","Type":"ContainerDied","Data":"fbe75dac53034ef1b6c24644544edcd6b3db9c7be34d324cf485a0d197f9abb1"} Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.781823 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbe75dac53034ef1b6c24644544edcd6b3db9c7be34d324cf485a0d197f9abb1" Nov 27 17:12:55 crc kubenswrapper[4792]: I1127 17:12:55.781850 4792 util.go:48] "No ready sandbox for pod can be found. 
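
Note: the pod_startup_latency_tracker records above report both podStartE2EDuration and podStartSLOduration. For redhat-marketplace-8kfjw the gap (48.03 s end-to-end versus 3.92 s SLO) matches its image-pull window (17:12:09.388 to 17:12:53.501), consistent with the SLO figure excluding pull time, while the revision pruners show no gap and a zero-valued firstStartedPulling because nothing was pulled. A sketch that extracts those numbers (same hypothetical one-record-per-line kubelet.log):

    import re

    # Pull the SLO and end-to-end startup durations per pod.
    SLO = re.compile(
        r'pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<slo>[\d.]+) '
        r'podStartE2EDuration="(?P<e2e>[^"]+)"'
    )

    with open("kubelet.log") as f:  # hypothetical dump, one record per line
        for line in f:
            m = SLO.search(line)
            if m:
                print(f'{m.group("pod")}: SLO={m.group("slo")}s, e2e={m.group("e2e")}')
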
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.116461 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 27 17:12:58 crc kubenswrapper[4792]: E1127 17:12:58.117033 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f83a303-0864-4e6c-a5ca-8d53e2a64edf" containerName="pruner" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.117048 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f83a303-0864-4e6c-a5ca-8d53e2a64edf" containerName="pruner" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.117172 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f83a303-0864-4e6c-a5ca-8d53e2a64edf" containerName="pruner" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.117602 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.119460 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.119735 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.125352 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.242559 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.242622 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.315255 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.315306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kube-api-access\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.315408 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-var-lock\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.416918 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-var-lock\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.417024 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-var-lock\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.417058 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.417097 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kube-api-access\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.417362 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.435143 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kube-api-access\") pod \"installer-9-crc\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.438984 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.550960 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.675968 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.676020 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.720469 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.831193 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 27 17:12:58 crc kubenswrapper[4792]: W1127 17:12:58.839507 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3bf54b70_6a9d_43cf_829a_35d0abe3bcc0.slice/crio-fad0b0207be32f756a299791a2130c90c4d573635cdb3dae8f9bb98782a97de0 WatchSource:0}: Error finding container fad0b0207be32f756a299791a2130c90c4d573635cdb3dae8f9bb98782a97de0: Status 404 returned error can't find the container with id fad0b0207be32f756a299791a2130c90c4d573635cdb3dae8f9bb98782a97de0 Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.841672 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:12:58 crc kubenswrapper[4792]: I1127 17:12:58.843050 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:12:59 crc kubenswrapper[4792]: I1127 17:12:59.781775 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26frg"] Nov 27 17:12:59 crc kubenswrapper[4792]: I1127 17:12:59.803822 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0","Type":"ContainerStarted","Data":"deaa42c03a8fa19bc786a9bf42cb3dfe0a9aae6ed870d91e56913b36762d15d8"} Nov 27 17:12:59 crc kubenswrapper[4792]: I1127 17:12:59.803878 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0","Type":"ContainerStarted","Data":"fad0b0207be32f756a299791a2130c90c4d573635cdb3dae8f9bb98782a97de0"} Nov 27 17:12:59 crc kubenswrapper[4792]: I1127 17:12:59.820296 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.820257288 podStartE2EDuration="1.820257288s" podCreationTimestamp="2025-11-27 17:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:12:59.818068682 +0000 UTC m=+202.160895060" watchObservedRunningTime="2025-11-27 17:12:59.820257288 +0000 UTC m=+202.163083616" Nov 27 17:13:00 crc kubenswrapper[4792]: I1127 17:13:00.808811 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-26frg" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerName="registry-server" 
containerID="cri-o://b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d" gracePeriod=2 Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.264833 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.457576 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdgr7\" (UniqueName: \"kubernetes.io/projected/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-kube-api-access-pdgr7\") pod \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.457696 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-catalog-content\") pod \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.457756 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-utilities\") pod \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\" (UID: \"8a2b6ebf-ba92-4d93-b98c-8df09cea427e\") " Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.461002 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-utilities" (OuterVolumeSpecName: "utilities") pod "8a2b6ebf-ba92-4d93-b98c-8df09cea427e" (UID: "8a2b6ebf-ba92-4d93-b98c-8df09cea427e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.465121 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-kube-api-access-pdgr7" (OuterVolumeSpecName: "kube-api-access-pdgr7") pod "8a2b6ebf-ba92-4d93-b98c-8df09cea427e" (UID: "8a2b6ebf-ba92-4d93-b98c-8df09cea427e"). InnerVolumeSpecName "kube-api-access-pdgr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.476864 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a2b6ebf-ba92-4d93-b98c-8df09cea427e" (UID: "8a2b6ebf-ba92-4d93-b98c-8df09cea427e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.558887 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdgr7\" (UniqueName: \"kubernetes.io/projected/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-kube-api-access-pdgr7\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.558930 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.558942 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2b6ebf-ba92-4d93-b98c-8df09cea427e-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.816151 4792 generic.go:334] "Generic (PLEG): container finished" podID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerID="b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d" exitCode=0 Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.816221 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26frg" event={"ID":"8a2b6ebf-ba92-4d93-b98c-8df09cea427e","Type":"ContainerDied","Data":"b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d"} Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.816253 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26frg" event={"ID":"8a2b6ebf-ba92-4d93-b98c-8df09cea427e","Type":"ContainerDied","Data":"6a2fcdfe99d191adcb61730e4cfda55f6bf5d3e009838a1a7273a714c723ff03"} Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.816274 4792 scope.go:117] "RemoveContainer" containerID="b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.816403 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26frg" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.818459 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p66q" event={"ID":"6879b0d9-3b16-4233-a329-ed6fd9c58bd8","Type":"ContainerStarted","Data":"3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e"} Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.829959 4792 scope.go:117] "RemoveContainer" containerID="876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.844857 4792 scope.go:117] "RemoveContainer" containerID="d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.852574 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26frg"] Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.855313 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-26frg"] Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.858563 4792 scope.go:117] "RemoveContainer" containerID="b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d" Nov 27 17:13:01 crc kubenswrapper[4792]: E1127 17:13:01.858995 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d\": container with ID starting with b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d not found: ID does not exist" containerID="b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.859029 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d"} err="failed to get container status \"b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d\": rpc error: code = NotFound desc = could not find container \"b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d\": container with ID starting with b5878790f41e77030c0e85dd26b56f1530596ddfc09187538f1739f560723a6d not found: ID does not exist" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.859124 4792 scope.go:117] "RemoveContainer" containerID="876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0" Nov 27 17:13:01 crc kubenswrapper[4792]: E1127 17:13:01.859454 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0\": container with ID starting with 876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0 not found: ID does not exist" containerID="876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.859500 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0"} err="failed to get container status \"876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0\": rpc error: code = NotFound desc = could not find container \"876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0\": container with ID starting with 876842cada3c42d1617ae08c2cbd4fad880433cdda31fbff5700dc146ae316e0 not 
found: ID does not exist" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.859530 4792 scope.go:117] "RemoveContainer" containerID="d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008" Nov 27 17:13:01 crc kubenswrapper[4792]: E1127 17:13:01.859816 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008\": container with ID starting with d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008 not found: ID does not exist" containerID="d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008" Nov 27 17:13:01 crc kubenswrapper[4792]: I1127 17:13:01.859847 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008"} err="failed to get container status \"d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008\": rpc error: code = NotFound desc = could not find container \"d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008\": container with ID starting with d7f7e76a7fddf081f8688c3ec8facdcbdfee927fb692e29edbb80c9d7327a008 not found: ID does not exist" Nov 27 17:13:02 crc kubenswrapper[4792]: I1127 17:13:02.700042 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" path="/var/lib/kubelet/pods/8a2b6ebf-ba92-4d93-b98c-8df09cea427e/volumes" Nov 27 17:13:02 crc kubenswrapper[4792]: I1127 17:13:02.825925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62nl2" event={"ID":"54d66969-c16d-43c7-adb3-64d00d1c451d","Type":"ContainerStarted","Data":"1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68"} Nov 27 17:13:02 crc kubenswrapper[4792]: I1127 17:13:02.829198 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8djv" event={"ID":"0aacc646-59f3-41fe-b59b-ce5fed81861f","Type":"ContainerStarted","Data":"3850a05171d77fe4c6e45e46384629bfcf0abae597ad7b85ae0325a8b30e9d3b"} Nov 27 17:13:02 crc kubenswrapper[4792]: I1127 17:13:02.832374 4792 generic.go:334] "Generic (PLEG): container finished" podID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerID="3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e" exitCode=0 Nov 27 17:13:02 crc kubenswrapper[4792]: I1127 17:13:02.832406 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p66q" event={"ID":"6879b0d9-3b16-4233-a329-ed6fd9c58bd8","Type":"ContainerDied","Data":"3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e"} Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.839413 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerID="56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393" exitCode=0 Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.839474 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d57cd" event={"ID":"a9d4ac25-3b3d-44a2-b384-0cc385078f80","Type":"ContainerDied","Data":"56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393"} Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.841294 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kt9ct" 
event={"ID":"a53d4e7e-b60e-4c7b-91ce-f025197188d8","Type":"ContainerStarted","Data":"f14fee108d5749f2fcbe8bc056998babc7bdae82652758f53484a0ab1e4152dd"} Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.844396 4792 generic.go:334] "Generic (PLEG): container finished" podID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerID="1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68" exitCode=0 Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.844464 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62nl2" event={"ID":"54d66969-c16d-43c7-adb3-64d00d1c451d","Type":"ContainerDied","Data":"1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68"} Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.847479 4792 generic.go:334] "Generic (PLEG): container finished" podID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerID="3850a05171d77fe4c6e45e46384629bfcf0abae597ad7b85ae0325a8b30e9d3b" exitCode=0 Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.847532 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8djv" event={"ID":"0aacc646-59f3-41fe-b59b-ce5fed81861f","Type":"ContainerDied","Data":"3850a05171d77fe4c6e45e46384629bfcf0abae597ad7b85ae0325a8b30e9d3b"} Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.850093 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p66q" event={"ID":"6879b0d9-3b16-4233-a329-ed6fd9c58bd8","Type":"ContainerStarted","Data":"6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89"} Nov 27 17:13:03 crc kubenswrapper[4792]: I1127 17:13:03.900557 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6p66q" podStartSLOduration=2.809384655 podStartE2EDuration="57.900537313s" podCreationTimestamp="2025-11-27 17:12:06 +0000 UTC" firstStartedPulling="2025-11-27 17:12:08.280627211 +0000 UTC m=+150.623453529" lastFinishedPulling="2025-11-27 17:13:03.371779869 +0000 UTC m=+205.714606187" observedRunningTime="2025-11-27 17:13:03.898425079 +0000 UTC m=+206.241251407" watchObservedRunningTime="2025-11-27 17:13:03.900537313 +0000 UTC m=+206.243363631" Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.857280 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d57cd" event={"ID":"a9d4ac25-3b3d-44a2-b384-0cc385078f80","Type":"ContainerStarted","Data":"b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc"} Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.858714 4792 generic.go:334] "Generic (PLEG): container finished" podID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerID="f14fee108d5749f2fcbe8bc056998babc7bdae82652758f53484a0ab1e4152dd" exitCode=0 Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.858803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kt9ct" event={"ID":"a53d4e7e-b60e-4c7b-91ce-f025197188d8","Type":"ContainerDied","Data":"f14fee108d5749f2fcbe8bc056998babc7bdae82652758f53484a0ab1e4152dd"} Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.860942 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62nl2" event={"ID":"54d66969-c16d-43c7-adb3-64d00d1c451d","Type":"ContainerStarted","Data":"aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6"} Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.862567 4792 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8djv" event={"ID":"0aacc646-59f3-41fe-b59b-ce5fed81861f","Type":"ContainerStarted","Data":"00f2a9869f5cf265615b2f2ea337c72b94a53dc5b0545e62b53d7703fe278825"} Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.864132 4792 generic.go:334] "Generic (PLEG): container finished" podID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerID="0e5fcc844d4d904f48d63369fb0866a7b676fc782313b2633fcd025632425f6a" exitCode=0 Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.864166 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqrlt" event={"ID":"bfe5d9aa-13db-4750-b440-dc1e83e149f0","Type":"ContainerDied","Data":"0e5fcc844d4d904f48d63369fb0866a7b676fc782313b2633fcd025632425f6a"} Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.876678 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d57cd" podStartSLOduration=2.905050413 podStartE2EDuration="58.876639764s" podCreationTimestamp="2025-11-27 17:12:06 +0000 UTC" firstStartedPulling="2025-11-27 17:12:08.330462569 +0000 UTC m=+150.673288887" lastFinishedPulling="2025-11-27 17:13:04.30205192 +0000 UTC m=+206.644878238" observedRunningTime="2025-11-27 17:13:04.875158469 +0000 UTC m=+207.217984807" watchObservedRunningTime="2025-11-27 17:13:04.876639764 +0000 UTC m=+207.219466102" Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.917876 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c8djv" podStartSLOduration=2.6386198050000003 podStartE2EDuration="59.917855913s" podCreationTimestamp="2025-11-27 17:12:05 +0000 UTC" firstStartedPulling="2025-11-27 17:12:07.229678392 +0000 UTC m=+149.572504710" lastFinishedPulling="2025-11-27 17:13:04.5089145 +0000 UTC m=+206.851740818" observedRunningTime="2025-11-27 17:13:04.915368397 +0000 UTC m=+207.258194715" watchObservedRunningTime="2025-11-27 17:13:04.917855913 +0000 UTC m=+207.260682231" Nov 27 17:13:04 crc kubenswrapper[4792]: I1127 17:13:04.959589 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-62nl2" podStartSLOduration=3.017229332 podStartE2EDuration="55.959572908s" podCreationTimestamp="2025-11-27 17:12:09 +0000 UTC" firstStartedPulling="2025-11-27 17:12:11.437949034 +0000 UTC m=+153.780775352" lastFinishedPulling="2025-11-27 17:13:04.38029252 +0000 UTC m=+206.723118928" observedRunningTime="2025-11-27 17:13:04.939111663 +0000 UTC m=+207.281938001" watchObservedRunningTime="2025-11-27 17:13:04.959572908 +0000 UTC m=+207.302399226" Nov 27 17:13:05 crc kubenswrapper[4792]: I1127 17:13:05.895331 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqrlt" event={"ID":"bfe5d9aa-13db-4750-b440-dc1e83e149f0","Type":"ContainerStarted","Data":"725ea26916c53d493d00d06a9c17648d20012c1b959548ae06aea0188f7c04f2"} Nov 27 17:13:05 crc kubenswrapper[4792]: I1127 17:13:05.902812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kt9ct" event={"ID":"a53d4e7e-b60e-4c7b-91ce-f025197188d8","Type":"ContainerStarted","Data":"f0a8404f9fd5632d7bc11ecc03defd033062051f4e6ded536c25c3a13421bbee"} Nov 27 17:13:05 crc kubenswrapper[4792]: I1127 17:13:05.925779 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-tqrlt" podStartSLOduration=2.9249408900000002 podStartE2EDuration="59.925759654s" podCreationTimestamp="2025-11-27 17:12:06 +0000 UTC" firstStartedPulling="2025-11-27 17:12:08.353731969 +0000 UTC m=+150.696558287" lastFinishedPulling="2025-11-27 17:13:05.354550733 +0000 UTC m=+207.697377051" observedRunningTime="2025-11-27 17:13:05.924193526 +0000 UTC m=+208.267019834" watchObservedRunningTime="2025-11-27 17:13:05.925759654 +0000 UTC m=+208.268585972" Nov 27 17:13:05 crc kubenswrapper[4792]: I1127 17:13:05.953154 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kt9ct" podStartSLOduration=2.700184064 podStartE2EDuration="56.953135501s" podCreationTimestamp="2025-11-27 17:12:09 +0000 UTC" firstStartedPulling="2025-11-27 17:12:11.462187993 +0000 UTC m=+153.805014311" lastFinishedPulling="2025-11-27 17:13:05.71513943 +0000 UTC m=+208.057965748" observedRunningTime="2025-11-27 17:13:05.952740329 +0000 UTC m=+208.295566647" watchObservedRunningTime="2025-11-27 17:13:05.953135501 +0000 UTC m=+208.295961819" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.288388 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.288454 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.335366 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.445610 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.445675 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.742487 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.742548 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.783020 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.876613 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.876667 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:13:06 crc kubenswrapper[4792]: I1127 17:13:06.923977 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:13:07 crc kubenswrapper[4792]: I1127 17:13:07.481101 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tqrlt" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="registry-server" probeResult="failure" output=< Nov 27 17:13:07 crc kubenswrapper[4792]: timeout: 
failed to connect service ":50051" within 1s Nov 27 17:13:07 crc kubenswrapper[4792]: > Nov 27 17:13:08 crc kubenswrapper[4792]: I1127 17:13:08.290401 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:13:08 crc kubenswrapper[4792]: I1127 17:13:08.291211 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:13:08 crc kubenswrapper[4792]: I1127 17:13:08.291357 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:13:08 crc kubenswrapper[4792]: I1127 17:13:08.292017 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 17:13:08 crc kubenswrapper[4792]: I1127 17:13:08.292179 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59" gracePeriod=600 Nov 27 17:13:09 crc kubenswrapper[4792]: I1127 17:13:09.647835 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:13:09 crc kubenswrapper[4792]: I1127 17:13:09.647895 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:13:09 crc kubenswrapper[4792]: I1127 17:13:09.923985 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59" exitCode=0 Nov 27 17:13:09 crc kubenswrapper[4792]: I1127 17:13:09.924085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59"} Nov 27 17:13:10 crc kubenswrapper[4792]: I1127 17:13:10.040230 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:13:10 crc kubenswrapper[4792]: I1127 17:13:10.040725 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:13:10 crc kubenswrapper[4792]: I1127 17:13:10.097766 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:13:10 crc kubenswrapper[4792]: I1127 17:13:10.690716 4792 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-kt9ct" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="registry-server" probeResult="failure" output=< Nov 27 17:13:10 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:13:10 crc kubenswrapper[4792]: > Nov 27 17:13:10 crc kubenswrapper[4792]: I1127 17:13:10.932739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"07733d4a5a2764e892f619318dddf4bb5833d9d78d072e993f8c20fe552da65d"} Nov 27 17:13:10 crc kubenswrapper[4792]: I1127 17:13:10.975262 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:13:13 crc kubenswrapper[4792]: I1127 17:13:13.179708 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62nl2"] Nov 27 17:13:13 crc kubenswrapper[4792]: I1127 17:13:13.948121 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-62nl2" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerName="registry-server" containerID="cri-o://aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6" gracePeriod=2 Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.774852 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.930816 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-utilities\") pod \"54d66969-c16d-43c7-adb3-64d00d1c451d\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.930918 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-catalog-content\") pod \"54d66969-c16d-43c7-adb3-64d00d1c451d\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.931875 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-utilities" (OuterVolumeSpecName: "utilities") pod "54d66969-c16d-43c7-adb3-64d00d1c451d" (UID: "54d66969-c16d-43c7-adb3-64d00d1c451d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.931878 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2vhn\" (UniqueName: \"kubernetes.io/projected/54d66969-c16d-43c7-adb3-64d00d1c451d-kube-api-access-d2vhn\") pod \"54d66969-c16d-43c7-adb3-64d00d1c451d\" (UID: \"54d66969-c16d-43c7-adb3-64d00d1c451d\") " Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.932213 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.936631 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d66969-c16d-43c7-adb3-64d00d1c451d-kube-api-access-d2vhn" (OuterVolumeSpecName: "kube-api-access-d2vhn") pod "54d66969-c16d-43c7-adb3-64d00d1c451d" (UID: "54d66969-c16d-43c7-adb3-64d00d1c451d"). InnerVolumeSpecName "kube-api-access-d2vhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.955622 4792 generic.go:334] "Generic (PLEG): container finished" podID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerID="aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6" exitCode=0 Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.955677 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62nl2" event={"ID":"54d66969-c16d-43c7-adb3-64d00d1c451d","Type":"ContainerDied","Data":"aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6"} Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.955710 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62nl2" event={"ID":"54d66969-c16d-43c7-adb3-64d00d1c451d","Type":"ContainerDied","Data":"de26b9ef1e8cd22c5730ab9b07f390a6e6f222aab66c64b7a98c1897f3af9c60"} Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.955737 4792 scope.go:117] "RemoveContainer" containerID="aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6" Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.955737 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-62nl2" Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.974478 4792 scope.go:117] "RemoveContainer" containerID="1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68" Nov 27 17:13:14 crc kubenswrapper[4792]: I1127 17:13:14.987595 4792 scope.go:117] "RemoveContainer" containerID="f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.011803 4792 scope.go:117] "RemoveContainer" containerID="aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6" Nov 27 17:13:15 crc kubenswrapper[4792]: E1127 17:13:15.012980 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6\": container with ID starting with aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6 not found: ID does not exist" containerID="aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.013009 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6"} err="failed to get container status \"aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6\": rpc error: code = NotFound desc = could not find container \"aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6\": container with ID starting with aa23f8bf584024fed934b4287ef80d7805c2bdcfcd3237a975bbd710d5ec8de6 not found: ID does not exist" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.013053 4792 scope.go:117] "RemoveContainer" containerID="1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68" Nov 27 17:13:15 crc kubenswrapper[4792]: E1127 17:13:15.013690 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68\": container with ID starting with 1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68 not found: ID does not exist" containerID="1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.013754 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68"} err="failed to get container status \"1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68\": rpc error: code = NotFound desc = could not find container \"1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68\": container with ID starting with 1c00eb4a554b8f997aca0995bedead080ce5afe25c6d1f14be38cedc1f6a2a68 not found: ID does not exist" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.013797 4792 scope.go:117] "RemoveContainer" containerID="f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5" Nov 27 17:13:15 crc kubenswrapper[4792]: E1127 17:13:15.014285 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5\": container with ID starting with f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5 not found: ID does not exist" containerID="f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5" 
Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.014307 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5"} err="failed to get container status \"f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5\": rpc error: code = NotFound desc = could not find container \"f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5\": container with ID starting with f779b5035019815d4c72f11c36575d8d8fa81f3e4d3b0f7c28c7d5aecd2ca1f5 not found: ID does not exist" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.040418 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2vhn\" (UniqueName: \"kubernetes.io/projected/54d66969-c16d-43c7-adb3-64d00d1c451d-kube-api-access-d2vhn\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.062201 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54d66969-c16d-43c7-adb3-64d00d1c451d" (UID: "54d66969-c16d-43c7-adb3-64d00d1c451d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.141117 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d66969-c16d-43c7-adb3-64d00d1c451d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.296801 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62nl2"] Nov 27 17:13:15 crc kubenswrapper[4792]: I1127 17:13:15.301697 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-62nl2"] Nov 27 17:13:16 crc kubenswrapper[4792]: I1127 17:13:16.334107 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:13:16 crc kubenswrapper[4792]: I1127 17:13:16.491833 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:13:16 crc kubenswrapper[4792]: I1127 17:13:16.541595 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:13:16 crc kubenswrapper[4792]: I1127 17:13:16.695350 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" path="/var/lib/kubelet/pods/54d66969-c16d-43c7-adb3-64d00d1c451d/volumes" Nov 27 17:13:16 crc kubenswrapper[4792]: I1127 17:13:16.780712 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:13:16 crc kubenswrapper[4792]: I1127 17:13:16.918242 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:13:18 crc kubenswrapper[4792]: I1127 17:13:18.582859 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d57cd"] Nov 27 17:13:18 crc kubenswrapper[4792]: I1127 17:13:18.583319 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d57cd" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" 
containerName="registry-server" containerID="cri-o://b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc" gracePeriod=2 Nov 27 17:13:18 crc kubenswrapper[4792]: I1127 17:13:18.612416 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jz8bh"] Nov 27 17:13:18 crc kubenswrapper[4792]: I1127 17:13:18.783430 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6p66q"] Nov 27 17:13:18 crc kubenswrapper[4792]: I1127 17:13:18.783691 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6p66q" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerName="registry-server" containerID="cri-o://6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89" gracePeriod=2 Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.447481 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.499274 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-catalog-content\") pod \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.499368 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk6q9\" (UniqueName: \"kubernetes.io/projected/a9d4ac25-3b3d-44a2-b384-0cc385078f80-kube-api-access-sk6q9\") pod \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.499421 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-utilities\") pod \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\" (UID: \"a9d4ac25-3b3d-44a2-b384-0cc385078f80\") " Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.500278 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-utilities" (OuterVolumeSpecName: "utilities") pod "a9d4ac25-3b3d-44a2-b384-0cc385078f80" (UID: "a9d4ac25-3b3d-44a2-b384-0cc385078f80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.504844 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d4ac25-3b3d-44a2-b384-0cc385078f80-kube-api-access-sk6q9" (OuterVolumeSpecName: "kube-api-access-sk6q9") pod "a9d4ac25-3b3d-44a2-b384-0cc385078f80" (UID: "a9d4ac25-3b3d-44a2-b384-0cc385078f80"). InnerVolumeSpecName "kube-api-access-sk6q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.547312 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9d4ac25-3b3d-44a2-b384-0cc385078f80" (UID: "a9d4ac25-3b3d-44a2-b384-0cc385078f80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.601139 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.601591 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk6q9\" (UniqueName: \"kubernetes.io/projected/a9d4ac25-3b3d-44a2-b384-0cc385078f80-kube-api-access-sk6q9\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.601605 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9d4ac25-3b3d-44a2-b384-0cc385078f80-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.704027 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.738056 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.757164 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.906158 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qkkj\" (UniqueName: \"kubernetes.io/projected/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-kube-api-access-9qkkj\") pod \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.906278 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-catalog-content\") pod \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.906312 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-utilities\") pod \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\" (UID: \"6879b0d9-3b16-4233-a329-ed6fd9c58bd8\") " Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.907252 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-utilities" (OuterVolumeSpecName: "utilities") pod "6879b0d9-3b16-4233-a329-ed6fd9c58bd8" (UID: "6879b0d9-3b16-4233-a329-ed6fd9c58bd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.912225 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-kube-api-access-9qkkj" (OuterVolumeSpecName: "kube-api-access-9qkkj") pod "6879b0d9-3b16-4233-a329-ed6fd9c58bd8" (UID: "6879b0d9-3b16-4233-a329-ed6fd9c58bd8"). InnerVolumeSpecName "kube-api-access-9qkkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.965350 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6879b0d9-3b16-4233-a329-ed6fd9c58bd8" (UID: "6879b0d9-3b16-4233-a329-ed6fd9c58bd8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.981088 4792 generic.go:334] "Generic (PLEG): container finished" podID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerID="6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89" exitCode=0 Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.981234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p66q" event={"ID":"6879b0d9-3b16-4233-a329-ed6fd9c58bd8","Type":"ContainerDied","Data":"6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89"} Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.981677 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6p66q" event={"ID":"6879b0d9-3b16-4233-a329-ed6fd9c58bd8","Type":"ContainerDied","Data":"4a8fa6214b633f8e7edd42fe7255909961e363aff6fd4be87f98134f6318857e"} Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.981319 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6p66q" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.981741 4792 scope.go:117] "RemoveContainer" containerID="6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.984055 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerID="b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc" exitCode=0 Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.984228 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d57cd" Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.984465 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d57cd" event={"ID":"a9d4ac25-3b3d-44a2-b384-0cc385078f80","Type":"ContainerDied","Data":"b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc"} Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.984486 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d57cd" event={"ID":"a9d4ac25-3b3d-44a2-b384-0cc385078f80","Type":"ContainerDied","Data":"eaae724b1a2954d2654a58961748ef48a7a67d8430badc487acd99918a6a5adb"} Nov 27 17:13:19 crc kubenswrapper[4792]: I1127 17:13:19.997817 4792 scope.go:117] "RemoveContainer" containerID="3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.007059 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qkkj\" (UniqueName: \"kubernetes.io/projected/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-kube-api-access-9qkkj\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.007084 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.007115 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6879b0d9-3b16-4233-a329-ed6fd9c58bd8-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.014225 4792 scope.go:117] "RemoveContainer" containerID="702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.018260 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6p66q"] Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.022145 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6p66q"] Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.027454 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d57cd"] Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.031542 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d57cd"] Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.041295 4792 scope.go:117] "RemoveContainer" containerID="6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89" Nov 27 17:13:20 crc kubenswrapper[4792]: E1127 17:13:20.041886 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89\": container with ID starting with 6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89 not found: ID does not exist" containerID="6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.041976 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89"} err="failed to get container status 
\"6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89\": rpc error: code = NotFound desc = could not find container \"6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89\": container with ID starting with 6dc4b8cfede4553ded8c18fea128a4a04f434a6c1917874b8810591ac76d4d89 not found: ID does not exist" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.042064 4792 scope.go:117] "RemoveContainer" containerID="3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e" Nov 27 17:13:20 crc kubenswrapper[4792]: E1127 17:13:20.042351 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e\": container with ID starting with 3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e not found: ID does not exist" containerID="3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.042428 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e"} err="failed to get container status \"3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e\": rpc error: code = NotFound desc = could not find container \"3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e\": container with ID starting with 3764f25969790377deb96888c26e5f6b2d60140362c8a7e89ca16484a64ae54e not found: ID does not exist" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.042491 4792 scope.go:117] "RemoveContainer" containerID="702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1" Nov 27 17:13:20 crc kubenswrapper[4792]: E1127 17:13:20.042992 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1\": container with ID starting with 702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1 not found: ID does not exist" containerID="702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.043012 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1"} err="failed to get container status \"702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1\": rpc error: code = NotFound desc = could not find container \"702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1\": container with ID starting with 702f97a143d376780da8086c403115f0711184b9fc927d31e67982f1036f0dc1 not found: ID does not exist" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.043025 4792 scope.go:117] "RemoveContainer" containerID="b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.059941 4792 scope.go:117] "RemoveContainer" containerID="56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.073687 4792 scope.go:117] "RemoveContainer" containerID="f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.091014 4792 scope.go:117] "RemoveContainer" containerID="b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc" Nov 27 17:13:20 crc 
kubenswrapper[4792]: E1127 17:13:20.091442 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc\": container with ID starting with b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc not found: ID does not exist" containerID="b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.091558 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc"} err="failed to get container status \"b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc\": rpc error: code = NotFound desc = could not find container \"b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc\": container with ID starting with b2740ea9ab30fc41eda080faec1fb990391a9a960771da2c28f710dfb300dafc not found: ID does not exist" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.091709 4792 scope.go:117] "RemoveContainer" containerID="56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393" Nov 27 17:13:20 crc kubenswrapper[4792]: E1127 17:13:20.092081 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393\": container with ID starting with 56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393 not found: ID does not exist" containerID="56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.092114 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393"} err="failed to get container status \"56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393\": rpc error: code = NotFound desc = could not find container \"56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393\": container with ID starting with 56646d8e23bf2da2970185bec17ef3505ba4872a4c9a819a4db506b91023c393 not found: ID does not exist" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.092139 4792 scope.go:117] "RemoveContainer" containerID="f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682" Nov 27 17:13:20 crc kubenswrapper[4792]: E1127 17:13:20.092869 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682\": container with ID starting with f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682 not found: ID does not exist" containerID="f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.092895 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682"} err="failed to get container status \"f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682\": rpc error: code = NotFound desc = could not find container \"f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682\": container with ID starting with f2d0867b4f7f2c1c5f6d1db22d8984e09b9abe2a6e4bf56a4d082b9395cf3682 not found: ID does not exist" Nov 27 17:13:20 crc kubenswrapper[4792]: 
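The repeated RemoveContainer calls above that come back with rpc NotFound are not failures in practice: by the time the kubelet issues the delete, CRI-O has already forgotten the container, so pod_container_deletor just logs the error and moves on. A minimal Go sketch of that idempotent-delete pattern, with a hypothetical remove callback standing in for the CRI RemoveContainer call:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer mirrors the pattern in the log above: the container may
// already be gone when cleanup runs, so a gRPC NotFound from the runtime
// is logged and treated as success instead of being propagated.
// The remove callback is a hypothetical stand-in for the CRI call.
func removeContainer(remove func(id string) error, id string) error {
	err := remove(id)
	if err == nil {
		return nil
	}
	if status.Code(err) == codes.NotFound {
		fmt.Printf("DeleteContainer returned error, ignoring: %v\n", err)
		return nil
	}
	return fmt.Errorf("remove container %s: %w", id, err)
}

func main() {
	// Simulate a runtime that no longer knows the container ID.
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	if err := removeContainer(gone, "6dc4b8cf"); err != nil {
		fmt.Println("unexpected:", err)
		return
	}
	fmt.Println("cleanup converged despite NotFound")
}

Treating NotFound as success is what lets cleanup converge even when two deletion paths race, which is exactly what these log lines show.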
I1127 17:13:20.701232 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" path="/var/lib/kubelet/pods/6879b0d9-3b16-4233-a329-ed6fd9c58bd8/volumes" Nov 27 17:13:20 crc kubenswrapper[4792]: I1127 17:13:20.702364 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" path="/var/lib/kubelet/pods/a9d4ac25-3b3d-44a2-b384-0cc385078f80/volumes" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.459950 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c8djv"] Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.460587 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c8djv" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerName="registry-server" containerID="cri-o://00f2a9869f5cf265615b2f2ea337c72b94a53dc5b0545e62b53d7703fe278825" gracePeriod=30 Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.475525 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tqrlt"] Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.475809 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tqrlt" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="registry-server" containerID="cri-o://725ea26916c53d493d00d06a9c17648d20012c1b959548ae06aea0188f7c04f2" gracePeriod=30 Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.480807 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7psgx"] Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.481026 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" podUID="228abb37-ff66-48b3-a882-d67ca901a322" containerName="marketplace-operator" containerID="cri-o://1c41c2c2f100456748486f3629080fcf52f3a10e833b71a29c230ec50caa4410" gracePeriod=30 Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.485128 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kfjw"] Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.485352 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8kfjw" podUID="28aedb43-5391-4868-b839-54e2857d62c7" containerName="registry-server" containerID="cri-o://34699664bdfbb62220f3ff185873bd433e4793a7bf609a3c8cbdf0bad7b9e931" gracePeriod=30 Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.511984 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wjl66"] Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514699 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514751 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514771 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerName="extract-utilities" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514778 4792 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerName="extract-utilities" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514786 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerName="extract-content" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514791 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerName="extract-content" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514805 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514811 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514820 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerName="extract-utilities" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514826 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerName="extract-utilities" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514834 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerName="extract-content" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514839 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerName="extract-content" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514846 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514852 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514863 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerName="extract-utilities" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514869 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerName="extract-utilities" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514877 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerName="extract-content" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514882 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerName="extract-content" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514891 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerName="extract-content" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514896 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerName="extract-content" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514904 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514910 4792 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: E1127 17:13:21.514919 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerName="extract-utilities" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.514925 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerName="extract-utilities" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.515086 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d66969-c16d-43c7-adb3-64d00d1c451d" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.515105 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d4ac25-3b3d-44a2-b384-0cc385078f80" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.515115 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6879b0d9-3b16-4233-a329-ed6fd9c58bd8" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.515127 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2b6ebf-ba92-4d93-b98c-8df09cea427e" containerName="registry-server" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.515694 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.517547 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kt9ct"] Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.517927 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kt9ct" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="registry-server" containerID="cri-o://f0a8404f9fd5632d7bc11ecc03defd033062051f4e6ded536c25c3a13421bbee" gracePeriod=30 Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.520333 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wjl66"] Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.524631 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb69df9-1d51-439c-bb3c-c17bd951bde3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wjl66\" (UID: \"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.524690 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wlx6\" (UniqueName: \"kubernetes.io/projected/4cb69df9-1d51-439c-bb3c-c17bd951bde3-kube-api-access-5wlx6\") pod \"marketplace-operator-79b997595-wjl66\" (UID: \"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.524743 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4cb69df9-1d51-439c-bb3c-c17bd951bde3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wjl66\" (UID: 
\"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.625662 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4cb69df9-1d51-439c-bb3c-c17bd951bde3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wjl66\" (UID: \"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.625728 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb69df9-1d51-439c-bb3c-c17bd951bde3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wjl66\" (UID: \"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.625783 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wlx6\" (UniqueName: \"kubernetes.io/projected/4cb69df9-1d51-439c-bb3c-c17bd951bde3-kube-api-access-5wlx6\") pod \"marketplace-operator-79b997595-wjl66\" (UID: \"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.627126 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb69df9-1d51-439c-bb3c-c17bd951bde3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wjl66\" (UID: \"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.631434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4cb69df9-1d51-439c-bb3c-c17bd951bde3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wjl66\" (UID: \"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.643986 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wlx6\" (UniqueName: \"kubernetes.io/projected/4cb69df9-1d51-439c-bb3c-c17bd951bde3-kube-api-access-5wlx6\") pod \"marketplace-operator-79b997595-wjl66\" (UID: \"4cb69df9-1d51-439c-bb3c-c17bd951bde3\") " pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:21 crc kubenswrapper[4792]: I1127 17:13:21.924460 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.002620 4792 generic.go:334] "Generic (PLEG): container finished" podID="28aedb43-5391-4868-b839-54e2857d62c7" containerID="34699664bdfbb62220f3ff185873bd433e4793a7bf609a3c8cbdf0bad7b9e931" exitCode=0 Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.002713 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kfjw" event={"ID":"28aedb43-5391-4868-b839-54e2857d62c7","Type":"ContainerDied","Data":"34699664bdfbb62220f3ff185873bd433e4793a7bf609a3c8cbdf0bad7b9e931"} Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.002766 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kfjw" event={"ID":"28aedb43-5391-4868-b839-54e2857d62c7","Type":"ContainerDied","Data":"d40171afb4968c78361d637012b466dc67e81a587304f68eaa47dedd08eabd69"} Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.002779 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d40171afb4968c78361d637012b466dc67e81a587304f68eaa47dedd08eabd69" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.004352 4792 generic.go:334] "Generic (PLEG): container finished" podID="228abb37-ff66-48b3-a882-d67ca901a322" containerID="1c41c2c2f100456748486f3629080fcf52f3a10e833b71a29c230ec50caa4410" exitCode=0 Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.004396 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" event={"ID":"228abb37-ff66-48b3-a882-d67ca901a322","Type":"ContainerDied","Data":"1c41c2c2f100456748486f3629080fcf52f3a10e833b71a29c230ec50caa4410"} Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.007206 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8djv" event={"ID":"0aacc646-59f3-41fe-b59b-ce5fed81861f","Type":"ContainerDied","Data":"00f2a9869f5cf265615b2f2ea337c72b94a53dc5b0545e62b53d7703fe278825"} Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.007217 4792 generic.go:334] "Generic (PLEG): container finished" podID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerID="00f2a9869f5cf265615b2f2ea337c72b94a53dc5b0545e62b53d7703fe278825" exitCode=0 Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.011087 4792 generic.go:334] "Generic (PLEG): container finished" podID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerID="725ea26916c53d493d00d06a9c17648d20012c1b959548ae06aea0188f7c04f2" exitCode=0 Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.011127 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqrlt" event={"ID":"bfe5d9aa-13db-4750-b440-dc1e83e149f0","Type":"ContainerDied","Data":"725ea26916c53d493d00d06a9c17648d20012c1b959548ae06aea0188f7c04f2"} Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.015596 4792 generic.go:334] "Generic (PLEG): container finished" podID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerID="f0a8404f9fd5632d7bc11ecc03defd033062051f4e6ded536c25c3a13421bbee" exitCode=0 Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.015612 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kt9ct" event={"ID":"a53d4e7e-b60e-4c7b-91ce-f025197188d8","Type":"ContainerDied","Data":"f0a8404f9fd5632d7bc11ecc03defd033062051f4e6ded536c25c3a13421bbee"} Nov 27 17:13:22 crc 
kubenswrapper[4792]: I1127 17:13:22.017718 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.131548 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp5bb\" (UniqueName: \"kubernetes.io/projected/28aedb43-5391-4868-b839-54e2857d62c7-kube-api-access-lp5bb\") pod \"28aedb43-5391-4868-b839-54e2857d62c7\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.131600 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-utilities\") pod \"28aedb43-5391-4868-b839-54e2857d62c7\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.131682 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-catalog-content\") pod \"28aedb43-5391-4868-b839-54e2857d62c7\" (UID: \"28aedb43-5391-4868-b839-54e2857d62c7\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.134625 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-utilities" (OuterVolumeSpecName: "utilities") pod "28aedb43-5391-4868-b839-54e2857d62c7" (UID: "28aedb43-5391-4868-b839-54e2857d62c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.137780 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28aedb43-5391-4868-b839-54e2857d62c7-kube-api-access-lp5bb" (OuterVolumeSpecName: "kube-api-access-lp5bb") pod "28aedb43-5391-4868-b839-54e2857d62c7" (UID: "28aedb43-5391-4868-b839-54e2857d62c7"). InnerVolumeSpecName "kube-api-access-lp5bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.154421 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28aedb43-5391-4868-b839-54e2857d62c7" (UID: "28aedb43-5391-4868-b839-54e2857d62c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.162894 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wjl66"] Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.233296 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp5bb\" (UniqueName: \"kubernetes.io/projected/28aedb43-5391-4868-b839-54e2857d62c7-kube-api-access-lp5bb\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.233524 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.233535 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28aedb43-5391-4868-b839-54e2857d62c7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.431041 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.440582 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-operator-metrics\") pod \"228abb37-ff66-48b3-a882-d67ca901a322\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.440675 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgc8\" (UniqueName: \"kubernetes.io/projected/228abb37-ff66-48b3-a882-d67ca901a322-kube-api-access-dsgc8\") pod \"228abb37-ff66-48b3-a882-d67ca901a322\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.440756 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-trusted-ca\") pod \"228abb37-ff66-48b3-a882-d67ca901a322\" (UID: \"228abb37-ff66-48b3-a882-d67ca901a322\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.441875 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "228abb37-ff66-48b3-a882-d67ca901a322" (UID: "228abb37-ff66-48b3-a882-d67ca901a322"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.453302 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "228abb37-ff66-48b3-a882-d67ca901a322" (UID: "228abb37-ff66-48b3-a882-d67ca901a322"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.453451 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228abb37-ff66-48b3-a882-d67ca901a322-kube-api-access-dsgc8" (OuterVolumeSpecName: "kube-api-access-dsgc8") pod "228abb37-ff66-48b3-a882-d67ca901a322" (UID: "228abb37-ff66-48b3-a882-d67ca901a322"). InnerVolumeSpecName "kube-api-access-dsgc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.491949 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.500299 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.506813 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.541740 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpmcr\" (UniqueName: \"kubernetes.io/projected/a53d4e7e-b60e-4c7b-91ce-f025197188d8-kube-api-access-vpmcr\") pod \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.541817 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-catalog-content\") pod \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.541841 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-catalog-content\") pod \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.541867 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-catalog-content\") pod \"0aacc646-59f3-41fe-b59b-ce5fed81861f\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.541897 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v45k\" (UniqueName: \"kubernetes.io/projected/0aacc646-59f3-41fe-b59b-ce5fed81861f-kube-api-access-9v45k\") pod \"0aacc646-59f3-41fe-b59b-ce5fed81861f\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.541920 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqz5z\" (UniqueName: \"kubernetes.io/projected/bfe5d9aa-13db-4750-b440-dc1e83e149f0-kube-api-access-lqz5z\") pod \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.541962 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-utilities\") pod 
\"bfe5d9aa-13db-4750-b440-dc1e83e149f0\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.541994 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-utilities\") pod \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\" (UID: \"a53d4e7e-b60e-4c7b-91ce-f025197188d8\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.542020 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-utilities\") pod \"0aacc646-59f3-41fe-b59b-ce5fed81861f\" (UID: \"0aacc646-59f3-41fe-b59b-ce5fed81861f\") " Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.542180 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.542198 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgc8\" (UniqueName: \"kubernetes.io/projected/228abb37-ff66-48b3-a882-d67ca901a322-kube-api-access-dsgc8\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.542210 4792 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/228abb37-ff66-48b3-a882-d67ca901a322-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.542979 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-utilities" (OuterVolumeSpecName: "utilities") pod "0aacc646-59f3-41fe-b59b-ce5fed81861f" (UID: "0aacc646-59f3-41fe-b59b-ce5fed81861f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.545501 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe5d9aa-13db-4750-b440-dc1e83e149f0-kube-api-access-lqz5z" (OuterVolumeSpecName: "kube-api-access-lqz5z") pod "bfe5d9aa-13db-4750-b440-dc1e83e149f0" (UID: "bfe5d9aa-13db-4750-b440-dc1e83e149f0"). InnerVolumeSpecName "kube-api-access-lqz5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.545557 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53d4e7e-b60e-4c7b-91ce-f025197188d8-kube-api-access-vpmcr" (OuterVolumeSpecName: "kube-api-access-vpmcr") pod "a53d4e7e-b60e-4c7b-91ce-f025197188d8" (UID: "a53d4e7e-b60e-4c7b-91ce-f025197188d8"). InnerVolumeSpecName "kube-api-access-vpmcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.545702 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aacc646-59f3-41fe-b59b-ce5fed81861f-kube-api-access-9v45k" (OuterVolumeSpecName: "kube-api-access-9v45k") pod "0aacc646-59f3-41fe-b59b-ce5fed81861f" (UID: "0aacc646-59f3-41fe-b59b-ce5fed81861f"). InnerVolumeSpecName "kube-api-access-9v45k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.546796 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-utilities" (OuterVolumeSpecName: "utilities") pod "bfe5d9aa-13db-4750-b440-dc1e83e149f0" (UID: "bfe5d9aa-13db-4750-b440-dc1e83e149f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.549229 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-utilities" (OuterVolumeSpecName: "utilities") pod "a53d4e7e-b60e-4c7b-91ce-f025197188d8" (UID: "a53d4e7e-b60e-4c7b-91ce-f025197188d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.624159 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0aacc646-59f3-41fe-b59b-ce5fed81861f" (UID: "0aacc646-59f3-41fe-b59b-ce5fed81861f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.642935 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfe5d9aa-13db-4750-b440-dc1e83e149f0" (UID: "bfe5d9aa-13db-4750-b440-dc1e83e149f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643140 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-catalog-content\") pod \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\" (UID: \"bfe5d9aa-13db-4750-b440-dc1e83e149f0\") " Nov 27 17:13:22 crc kubenswrapper[4792]: W1127 17:13:22.643273 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bfe5d9aa-13db-4750-b440-dc1e83e149f0/volumes/kubernetes.io~empty-dir/catalog-content Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfe5d9aa-13db-4750-b440-dc1e83e149f0" (UID: "bfe5d9aa-13db-4750-b440-dc1e83e149f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643455 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v45k\" (UniqueName: \"kubernetes.io/projected/0aacc646-59f3-41fe-b59b-ce5fed81861f-kube-api-access-9v45k\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643477 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqz5z\" (UniqueName: \"kubernetes.io/projected/bfe5d9aa-13db-4750-b440-dc1e83e149f0-kube-api-access-lqz5z\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643491 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643505 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643516 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643528 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpmcr\" (UniqueName: \"kubernetes.io/projected/a53d4e7e-b60e-4c7b-91ce-f025197188d8-kube-api-access-vpmcr\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643541 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe5d9aa-13db-4750-b440-dc1e83e149f0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.643552 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aacc646-59f3-41fe-b59b-ce5fed81861f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.682428 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a53d4e7e-b60e-4c7b-91ce-f025197188d8" (UID: "a53d4e7e-b60e-4c7b-91ce-f025197188d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:13:22 crc kubenswrapper[4792]: I1127 17:13:22.744892 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a53d4e7e-b60e-4c7b-91ce-f025197188d8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.022340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kt9ct" event={"ID":"a53d4e7e-b60e-4c7b-91ce-f025197188d8","Type":"ContainerDied","Data":"6ed2d65a2c473aa09a37b393ec647158742258912e920cfece4fb39f9a88786e"} Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.022393 4792 scope.go:117] "RemoveContainer" containerID="f0a8404f9fd5632d7bc11ecc03defd033062051f4e6ded536c25c3a13421bbee" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.022496 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kt9ct" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.026816 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" event={"ID":"4cb69df9-1d51-439c-bb3c-c17bd951bde3","Type":"ContainerStarted","Data":"99a33d7d5436732f7677b92371e4afa7a048b6abdd0370eb62ef33a6cbd6b950"} Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.026860 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" event={"ID":"4cb69df9-1d51-439c-bb3c-c17bd951bde3","Type":"ContainerStarted","Data":"7a60a593c6efc9d82ec4028cb4d7b0d79d751c65f5dffa34392652447a82d54b"} Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.027149 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.028760 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" event={"ID":"228abb37-ff66-48b3-a882-d67ca901a322","Type":"ContainerDied","Data":"3cc0bb6d0ba7fe5eb8fe83c6895f7dca21cf2478a8efb3b58fd3cda7594f811f"} Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.028788 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7psgx" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.031054 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8djv" event={"ID":"0aacc646-59f3-41fe-b59b-ce5fed81861f","Type":"ContainerDied","Data":"87337299c9012429c5a44efd4c146e9eba6c188554f2d08da000c0e11a375513"} Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.031102 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8djv" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.034451 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.035352 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kfjw" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.035817 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqrlt" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.035810 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqrlt" event={"ID":"bfe5d9aa-13db-4750-b440-dc1e83e149f0","Type":"ContainerDied","Data":"d039a5bad769d7ee140ff1b8a23729cd58b67e2380dc1ac67aff5d651a54497c"} Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.045585 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wjl66" podStartSLOduration=2.045471281 podStartE2EDuration="2.045471281s" podCreationTimestamp="2025-11-27 17:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:13:23.040033945 +0000 UTC m=+225.382860273" watchObservedRunningTime="2025-11-27 17:13:23.045471281 +0000 UTC m=+225.388297589" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.046122 4792 scope.go:117] "RemoveContainer" containerID="f14fee108d5749f2fcbe8bc056998babc7bdae82652758f53484a0ab1e4152dd" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.074684 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kt9ct"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.074951 4792 scope.go:117] "RemoveContainer" containerID="8c4d84c8dd65530d7bdd30b62be007bdab81e697572b1b1eb6446bac59235a96" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.075815 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kt9ct"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.095572 4792 scope.go:117] "RemoveContainer" containerID="1c41c2c2f100456748486f3629080fcf52f3a10e833b71a29c230ec50caa4410" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.106859 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kfjw"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.111118 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kfjw"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.116844 4792 scope.go:117] "RemoveContainer" containerID="00f2a9869f5cf265615b2f2ea337c72b94a53dc5b0545e62b53d7703fe278825" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.117540 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c8djv"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.124307 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c8djv"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.128777 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tqrlt"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.132134 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tqrlt"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.132968 4792 scope.go:117] "RemoveContainer" containerID="3850a05171d77fe4c6e45e46384629bfcf0abae597ad7b85ae0325a8b30e9d3b" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.135052 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7psgx"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.137892 4792 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7psgx"] Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.150509 4792 scope.go:117] "RemoveContainer" containerID="a80e8e6344e5c93558954624fb3af3080cd5af0f84b0349038e3bb54ac143e4e" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.162925 4792 scope.go:117] "RemoveContainer" containerID="725ea26916c53d493d00d06a9c17648d20012c1b959548ae06aea0188f7c04f2" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.176282 4792 scope.go:117] "RemoveContainer" containerID="0e5fcc844d4d904f48d63369fb0866a7b676fc782313b2633fcd025632425f6a" Nov 27 17:13:23 crc kubenswrapper[4792]: I1127 17:13:23.190129 4792 scope.go:117] "RemoveContainer" containerID="d3db912fd52e9f6c9fae2e1025e3edeee402374c4fd82196d9047233f8dcff37" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.694579 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" path="/var/lib/kubelet/pods/0aacc646-59f3-41fe-b59b-ce5fed81861f/volumes" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.697255 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228abb37-ff66-48b3-a882-d67ca901a322" path="/var/lib/kubelet/pods/228abb37-ff66-48b3-a882-d67ca901a322/volumes" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.697936 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28aedb43-5391-4868-b839-54e2857d62c7" path="/var/lib/kubelet/pods/28aedb43-5391-4868-b839-54e2857d62c7/volumes" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.699796 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" path="/var/lib/kubelet/pods/a53d4e7e-b60e-4c7b-91ce-f025197188d8/volumes" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.701054 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" path="/var/lib/kubelet/pods/bfe5d9aa-13db-4750-b440-dc1e83e149f0/volumes" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.985363 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pf2x7"] Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.985737 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.985762 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.985781 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="extract-utilities" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.985792 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="extract-utilities" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.985858 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28aedb43-5391-4868-b839-54e2857d62c7" containerName="extract-content" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.985871 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="28aedb43-5391-4868-b839-54e2857d62c7" containerName="extract-content" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.985887 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="28aedb43-5391-4868-b839-54e2857d62c7" containerName="extract-utilities" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.985898 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="28aedb43-5391-4868-b839-54e2857d62c7" containerName="extract-utilities" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.985912 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerName="extract-content" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.985922 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerName="extract-content" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.985937 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.985947 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.985965 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="extract-content" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.985975 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="extract-content" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.985990 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986001 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.986018 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="extract-content" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986029 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="extract-content" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.986044 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28aedb43-5391-4868-b839-54e2857d62c7" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986054 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="28aedb43-5391-4868-b839-54e2857d62c7" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.986064 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="extract-utilities" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986074 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="extract-utilities" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.986092 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerName="extract-utilities" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986103 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerName="extract-utilities" Nov 27 17:13:24 crc kubenswrapper[4792]: E1127 17:13:24.986113 4792 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="228abb37-ff66-48b3-a882-d67ca901a322" containerName="marketplace-operator" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986123 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="228abb37-ff66-48b3-a882-d67ca901a322" containerName="marketplace-operator" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986275 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe5d9aa-13db-4750-b440-dc1e83e149f0" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986292 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53d4e7e-b60e-4c7b-91ce-f025197188d8" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986356 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aacc646-59f3-41fe-b59b-ce5fed81861f" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986372 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="228abb37-ff66-48b3-a882-d67ca901a322" containerName="marketplace-operator" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.986385 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="28aedb43-5391-4868-b839-54e2857d62c7" containerName="registry-server" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.987609 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.991711 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 27 17:13:24 crc kubenswrapper[4792]: I1127 17:13:24.994759 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pf2x7"] Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.175270 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-utilities\") pod \"community-operators-pf2x7\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.175389 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdfk\" (UniqueName: \"kubernetes.io/projected/bb471a97-f4b1-488a-99f2-35df6686cd45-kube-api-access-8bdfk\") pod \"community-operators-pf2x7\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.175443 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-catalog-content\") pod \"community-operators-pf2x7\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.186693 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rv5f"] Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.189599 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.191394 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.198802 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rv5f"] Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.276361 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gwf\" (UniqueName: \"kubernetes.io/projected/ac78acde-862f-4924-a4a1-59edc00f6ee5-kube-api-access-j6gwf\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.276421 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bdfk\" (UniqueName: \"kubernetes.io/projected/bb471a97-f4b1-488a-99f2-35df6686cd45-kube-api-access-8bdfk\") pod \"community-operators-pf2x7\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.276450 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac78acde-862f-4924-a4a1-59edc00f6ee5-catalog-content\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.276479 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-catalog-content\") pod \"community-operators-pf2x7\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.276570 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-utilities\") pod \"community-operators-pf2x7\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.276611 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac78acde-862f-4924-a4a1-59edc00f6ee5-utilities\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.277021 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-utilities\") pod \"community-operators-pf2x7\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.277148 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-catalog-content\") pod \"community-operators-pf2x7\" (UID: 
\"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.297427 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bdfk\" (UniqueName: \"kubernetes.io/projected/bb471a97-f4b1-488a-99f2-35df6686cd45-kube-api-access-8bdfk\") pod \"community-operators-pf2x7\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.358534 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.378090 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gwf\" (UniqueName: \"kubernetes.io/projected/ac78acde-862f-4924-a4a1-59edc00f6ee5-kube-api-access-j6gwf\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.378139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac78acde-862f-4924-a4a1-59edc00f6ee5-catalog-content\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.378195 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac78acde-862f-4924-a4a1-59edc00f6ee5-utilities\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.378567 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac78acde-862f-4924-a4a1-59edc00f6ee5-utilities\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.378969 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac78acde-862f-4924-a4a1-59edc00f6ee5-catalog-content\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.396038 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gwf\" (UniqueName: \"kubernetes.io/projected/ac78acde-862f-4924-a4a1-59edc00f6ee5-kube-api-access-j6gwf\") pod \"redhat-marketplace-5rv5f\" (UID: \"ac78acde-862f-4924-a4a1-59edc00f6ee5\") " pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.506410 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.744439 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pf2x7"] Nov 27 17:13:25 crc kubenswrapper[4792]: W1127 17:13:25.747131 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb471a97_f4b1_488a_99f2_35df6686cd45.slice/crio-8b1bbcc8df2c7149bcb7496363abd0b3168922b92b34a5866ebcf1875875de1f WatchSource:0}: Error finding container 8b1bbcc8df2c7149bcb7496363abd0b3168922b92b34a5866ebcf1875875de1f: Status 404 returned error can't find the container with id 8b1bbcc8df2c7149bcb7496363abd0b3168922b92b34a5866ebcf1875875de1f Nov 27 17:13:25 crc kubenswrapper[4792]: I1127 17:13:25.872617 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rv5f"] Nov 27 17:13:25 crc kubenswrapper[4792]: W1127 17:13:25.878601 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac78acde_862f_4924_a4a1_59edc00f6ee5.slice/crio-147986548bb26cd4e63f2bbbe95022f609b8f4caee325aeac22192046231d5f1 WatchSource:0}: Error finding container 147986548bb26cd4e63f2bbbe95022f609b8f4caee325aeac22192046231d5f1: Status 404 returned error can't find the container with id 147986548bb26cd4e63f2bbbe95022f609b8f4caee325aeac22192046231d5f1 Nov 27 17:13:26 crc kubenswrapper[4792]: I1127 17:13:26.056704 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rv5f" event={"ID":"ac78acde-862f-4924-a4a1-59edc00f6ee5","Type":"ContainerStarted","Data":"147986548bb26cd4e63f2bbbe95022f609b8f4caee325aeac22192046231d5f1"} Nov 27 17:13:26 crc kubenswrapper[4792]: I1127 17:13:26.058800 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf2x7" event={"ID":"bb471a97-f4b1-488a-99f2-35df6686cd45","Type":"ContainerStarted","Data":"8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a"} Nov 27 17:13:26 crc kubenswrapper[4792]: I1127 17:13:26.058849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf2x7" event={"ID":"bb471a97-f4b1-488a-99f2-35df6686cd45","Type":"ContainerStarted","Data":"8b1bbcc8df2c7149bcb7496363abd0b3168922b92b34a5866ebcf1875875de1f"} Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.067684 4792 generic.go:334] "Generic (PLEG): container finished" podID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerID="8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a" exitCode=0 Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.067806 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf2x7" event={"ID":"bb471a97-f4b1-488a-99f2-35df6686cd45","Type":"ContainerDied","Data":"8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a"} Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.070933 4792 generic.go:334] "Generic (PLEG): container finished" podID="ac78acde-862f-4924-a4a1-59edc00f6ee5" containerID="ebb99787901af8533d6206950caf2451e0503bf7e0d23bf3855d5008f2fd41eb" exitCode=0 Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.071011 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rv5f" 
event={"ID":"ac78acde-862f-4924-a4a1-59edc00f6ee5","Type":"ContainerDied","Data":"ebb99787901af8533d6206950caf2451e0503bf7e0d23bf3855d5008f2fd41eb"} Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.386453 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4k6w8"] Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.387631 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.391281 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.397469 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4k6w8"] Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.504701 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cfc9e0-e90a-438f-9128-6d59f065695e-catalog-content\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.504790 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zn77\" (UniqueName: \"kubernetes.io/projected/16cfc9e0-e90a-438f-9128-6d59f065695e-kube-api-access-4zn77\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.504820 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cfc9e0-e90a-438f-9128-6d59f065695e-utilities\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.591560 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t2tmt"] Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.592616 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.597159 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.604978 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2tmt"] Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.605489 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cfc9e0-e90a-438f-9128-6d59f065695e-catalog-content\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.605573 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zn77\" (UniqueName: \"kubernetes.io/projected/16cfc9e0-e90a-438f-9128-6d59f065695e-kube-api-access-4zn77\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.605602 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cfc9e0-e90a-438f-9128-6d59f065695e-utilities\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.606032 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16cfc9e0-e90a-438f-9128-6d59f065695e-utilities\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.606172 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16cfc9e0-e90a-438f-9128-6d59f065695e-catalog-content\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.636389 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zn77\" (UniqueName: \"kubernetes.io/projected/16cfc9e0-e90a-438f-9128-6d59f065695e-kube-api-access-4zn77\") pod \"redhat-operators-4k6w8\" (UID: \"16cfc9e0-e90a-438f-9128-6d59f065695e\") " pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.707182 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad170db-00fe-471f-b3b3-0201e1b54c21-utilities\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.707224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbqp\" (UniqueName: \"kubernetes.io/projected/fad170db-00fe-471f-b3b3-0201e1b54c21-kube-api-access-brbqp\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " 
pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.707256 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad170db-00fe-471f-b3b3-0201e1b54c21-catalog-content\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.716285 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.808796 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad170db-00fe-471f-b3b3-0201e1b54c21-utilities\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.808846 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbqp\" (UniqueName: \"kubernetes.io/projected/fad170db-00fe-471f-b3b3-0201e1b54c21-kube-api-access-brbqp\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.808887 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad170db-00fe-471f-b3b3-0201e1b54c21-catalog-content\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.810138 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fad170db-00fe-471f-b3b3-0201e1b54c21-utilities\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.810381 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fad170db-00fe-471f-b3b3-0201e1b54c21-catalog-content\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.829988 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbqp\" (UniqueName: \"kubernetes.io/projected/fad170db-00fe-471f-b3b3-0201e1b54c21-kube-api-access-brbqp\") pod \"certified-operators-t2tmt\" (UID: \"fad170db-00fe-471f-b3b3-0201e1b54c21\") " pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.878722 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4k6w8"] Nov 27 17:13:27 crc kubenswrapper[4792]: I1127 17:13:27.909966 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:28 crc kubenswrapper[4792]: I1127 17:13:28.084043 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4k6w8" event={"ID":"16cfc9e0-e90a-438f-9128-6d59f065695e","Type":"ContainerStarted","Data":"388f70bf9c59e407768c6b4231354f6e6f84baf43e83e6368584978ecc04fd42"} Nov 27 17:13:28 crc kubenswrapper[4792]: I1127 17:13:28.084338 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4k6w8" event={"ID":"16cfc9e0-e90a-438f-9128-6d59f065695e","Type":"ContainerStarted","Data":"5a41b98d17bd47b0e08831f5146021827cc8013ea4c95fe2a2d39b3be2b3c81e"} Nov 27 17:13:28 crc kubenswrapper[4792]: I1127 17:13:28.118768 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2tmt"] Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.089845 4792 generic.go:334] "Generic (PLEG): container finished" podID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerID="9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a" exitCode=0 Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.090198 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf2x7" event={"ID":"bb471a97-f4b1-488a-99f2-35df6686cd45","Type":"ContainerDied","Data":"9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a"} Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.092352 4792 generic.go:334] "Generic (PLEG): container finished" podID="16cfc9e0-e90a-438f-9128-6d59f065695e" containerID="388f70bf9c59e407768c6b4231354f6e6f84baf43e83e6368584978ecc04fd42" exitCode=0 Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.092405 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4k6w8" event={"ID":"16cfc9e0-e90a-438f-9128-6d59f065695e","Type":"ContainerDied","Data":"388f70bf9c59e407768c6b4231354f6e6f84baf43e83e6368584978ecc04fd42"} Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.093794 4792 generic.go:334] "Generic (PLEG): container finished" podID="fad170db-00fe-471f-b3b3-0201e1b54c21" containerID="fdef339910019d10fa51a4d188a98f0451c2371d2205b923b6c5a38d29b39eaf" exitCode=0 Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.093828 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2tmt" event={"ID":"fad170db-00fe-471f-b3b3-0201e1b54c21","Type":"ContainerDied","Data":"fdef339910019d10fa51a4d188a98f0451c2371d2205b923b6c5a38d29b39eaf"} Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.093842 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2tmt" event={"ID":"fad170db-00fe-471f-b3b3-0201e1b54c21","Type":"ContainerStarted","Data":"1e81267242939e5fbff73b1a6fb0453d2ac27f71728b2c3b7cd88b1e201c9d05"} Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.095950 4792 generic.go:334] "Generic (PLEG): container finished" podID="ac78acde-862f-4924-a4a1-59edc00f6ee5" containerID="104ab913529e3a87f7602a7419b12a5c8e19b867a5bd1a33400f37c3c3b2e036" exitCode=0 Nov 27 17:13:29 crc kubenswrapper[4792]: I1127 17:13:29.095989 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rv5f" event={"ID":"ac78acde-862f-4924-a4a1-59edc00f6ee5","Type":"ContainerDied","Data":"104ab913529e3a87f7602a7419b12a5c8e19b867a5bd1a33400f37c3c3b2e036"} Nov 27 17:13:30 crc 
kubenswrapper[4792]: I1127 17:13:30.103978 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rv5f" event={"ID":"ac78acde-862f-4924-a4a1-59edc00f6ee5","Type":"ContainerStarted","Data":"4d0ad765dc49203af66c13b586598f81c068e4cf22c9a7157a2b2ec446d3b232"} Nov 27 17:13:30 crc kubenswrapper[4792]: I1127 17:13:30.121411 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rv5f" podStartSLOduration=2.455990924 podStartE2EDuration="5.121397054s" podCreationTimestamp="2025-11-27 17:13:25 +0000 UTC" firstStartedPulling="2025-11-27 17:13:27.073728165 +0000 UTC m=+229.416554503" lastFinishedPulling="2025-11-27 17:13:29.739134315 +0000 UTC m=+232.081960633" observedRunningTime="2025-11-27 17:13:30.119497136 +0000 UTC m=+232.462323464" watchObservedRunningTime="2025-11-27 17:13:30.121397054 +0000 UTC m=+232.464223372" Nov 27 17:13:31 crc kubenswrapper[4792]: I1127 17:13:31.111882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2tmt" event={"ID":"fad170db-00fe-471f-b3b3-0201e1b54c21","Type":"ContainerStarted","Data":"df4897f7425834df035ea1c687957ae0ae1d9695c3379e28b361adc6399eb3df"} Nov 27 17:13:31 crc kubenswrapper[4792]: I1127 17:13:31.113845 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf2x7" event={"ID":"bb471a97-f4b1-488a-99f2-35df6686cd45","Type":"ContainerStarted","Data":"177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0"} Nov 27 17:13:31 crc kubenswrapper[4792]: I1127 17:13:31.115867 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4k6w8" event={"ID":"16cfc9e0-e90a-438f-9128-6d59f065695e","Type":"ContainerStarted","Data":"c573a3b7c7d0b8836de0fb949cbb64be0e795f1a79c5f2b3936bf719531f943d"} Nov 27 17:13:31 crc kubenswrapper[4792]: I1127 17:13:31.158110 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pf2x7" podStartSLOduration=3.920233146 podStartE2EDuration="7.158087575s" podCreationTimestamp="2025-11-27 17:13:24 +0000 UTC" firstStartedPulling="2025-11-27 17:13:27.070808616 +0000 UTC m=+229.413634924" lastFinishedPulling="2025-11-27 17:13:30.308663015 +0000 UTC m=+232.651489353" observedRunningTime="2025-11-27 17:13:31.156369912 +0000 UTC m=+233.499196250" watchObservedRunningTime="2025-11-27 17:13:31.158087575 +0000 UTC m=+233.500913893" Nov 27 17:13:32 crc kubenswrapper[4792]: I1127 17:13:32.121422 4792 generic.go:334] "Generic (PLEG): container finished" podID="16cfc9e0-e90a-438f-9128-6d59f065695e" containerID="c573a3b7c7d0b8836de0fb949cbb64be0e795f1a79c5f2b3936bf719531f943d" exitCode=0 Nov 27 17:13:32 crc kubenswrapper[4792]: I1127 17:13:32.121460 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4k6w8" event={"ID":"16cfc9e0-e90a-438f-9128-6d59f065695e","Type":"ContainerDied","Data":"c573a3b7c7d0b8836de0fb949cbb64be0e795f1a79c5f2b3936bf719531f943d"} Nov 27 17:13:32 crc kubenswrapper[4792]: I1127 17:13:32.123130 4792 generic.go:334] "Generic (PLEG): container finished" podID="fad170db-00fe-471f-b3b3-0201e1b54c21" containerID="df4897f7425834df035ea1c687957ae0ae1d9695c3379e28b361adc6399eb3df" exitCode=0 Nov 27 17:13:32 crc kubenswrapper[4792]: I1127 17:13:32.123394 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2tmt" 
event={"ID":"fad170db-00fe-471f-b3b3-0201e1b54c21","Type":"ContainerDied","Data":"df4897f7425834df035ea1c687957ae0ae1d9695c3379e28b361adc6399eb3df"} Nov 27 17:13:33 crc kubenswrapper[4792]: I1127 17:13:33.133928 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2tmt" event={"ID":"fad170db-00fe-471f-b3b3-0201e1b54c21","Type":"ContainerStarted","Data":"51e3fdbf6c1469dbf74d9ae9c81f5a92c51430262773ab4ebe571dceaeb18f31"} Nov 27 17:13:33 crc kubenswrapper[4792]: I1127 17:13:33.152994 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t2tmt" podStartSLOduration=2.45156113 podStartE2EDuration="6.15297283s" podCreationTimestamp="2025-11-27 17:13:27 +0000 UTC" firstStartedPulling="2025-11-27 17:13:29.094884743 +0000 UTC m=+231.437711051" lastFinishedPulling="2025-11-27 17:13:32.796296423 +0000 UTC m=+235.139122751" observedRunningTime="2025-11-27 17:13:33.151357091 +0000 UTC m=+235.494183409" watchObservedRunningTime="2025-11-27 17:13:33.15297283 +0000 UTC m=+235.495799148" Nov 27 17:13:34 crc kubenswrapper[4792]: I1127 17:13:34.141749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4k6w8" event={"ID":"16cfc9e0-e90a-438f-9128-6d59f065695e","Type":"ContainerStarted","Data":"b99e934e5c3795c51eb75ae2a65b8d049a27d6cd8f1c7e597b2dabb8fb3939ae"} Nov 27 17:13:35 crc kubenswrapper[4792]: I1127 17:13:35.359746 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:35 crc kubenswrapper[4792]: I1127 17:13:35.360690 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:35 crc kubenswrapper[4792]: I1127 17:13:35.414491 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:35 crc kubenswrapper[4792]: I1127 17:13:35.444628 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4k6w8" podStartSLOduration=4.461779162 podStartE2EDuration="8.44460647s" podCreationTimestamp="2025-11-27 17:13:27 +0000 UTC" firstStartedPulling="2025-11-27 17:13:29.094372257 +0000 UTC m=+231.437198575" lastFinishedPulling="2025-11-27 17:13:33.077199555 +0000 UTC m=+235.420025883" observedRunningTime="2025-11-27 17:13:34.168606957 +0000 UTC m=+236.511433275" watchObservedRunningTime="2025-11-27 17:13:35.44460647 +0000 UTC m=+237.787432818" Nov 27 17:13:35 crc kubenswrapper[4792]: I1127 17:13:35.507072 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:35 crc kubenswrapper[4792]: I1127 17:13:35.507523 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:35 crc kubenswrapper[4792]: I1127 17:13:35.546773 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.188825 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.192068 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-5rv5f" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.779904 4792 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.780664 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783343 4792 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783372 4792 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 27 17:13:36 crc kubenswrapper[4792]: E1127 17:13:36.783485 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783495 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 27 17:13:36 crc kubenswrapper[4792]: E1127 17:13:36.783509 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783514 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 27 17:13:36 crc kubenswrapper[4792]: E1127 17:13:36.783521 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783526 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 27 17:13:36 crc kubenswrapper[4792]: E1127 17:13:36.783537 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783543 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 27 17:13:36 crc kubenswrapper[4792]: E1127 17:13:36.783550 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783555 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 27 17:13:36 crc kubenswrapper[4792]: E1127 17:13:36.783562 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783568 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783660 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783668 4792 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783677 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783685 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783694 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 27 17:13:36 crc kubenswrapper[4792]: E1127 17:13:36.783778 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783785 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.783864 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.784797 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33" gracePeriod=15 Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.784918 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9" gracePeriod=15 Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.784956 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2" gracePeriod=15 Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.784983 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99" gracePeriod=15 Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.785011 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c" gracePeriod=15 Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.822693 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.916671 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.916725 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.916757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.916786 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.916815 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.916858 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.916878 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:36 crc kubenswrapper[4792]: I1127 17:13:36.916903 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018526 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018573 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018659 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018695 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018732 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018694 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018783 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc 
kubenswrapper[4792]: I1127 17:13:37.018798 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018836 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018889 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018932 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.018961 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.019086 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.123453 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:13:37 crc kubenswrapper[4792]: W1127 17:13:37.148504 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-4af481559e544d3edc327e1aedc8f3c2aceaead7cc2ef8a761eb7b2c71ac9c62 WatchSource:0}: Error finding container 4af481559e544d3edc327e1aedc8f3c2aceaead7cc2ef8a761eb7b2c71ac9c62: Status 404 returned error can't find the container with id 4af481559e544d3edc327e1aedc8f3c2aceaead7cc2ef8a761eb7b2c71ac9c62 Nov 27 17:13:37 crc kubenswrapper[4792]: E1127 17:13:37.151441 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187bec698e67fc95 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-27 17:13:37.150581909 +0000 UTC m=+239.493408227,LastTimestamp:2025-11-27 17:13:37.150581909 +0000 UTC m=+239.493408227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.160726 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4af481559e544d3edc327e1aedc8f3c2aceaead7cc2ef8a761eb7b2c71ac9c62"} Nov 27 17:13:37 crc kubenswrapper[4792]: E1127 17:13:37.714235 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:37 crc kubenswrapper[4792]: E1127 17:13:37.714899 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:37 crc kubenswrapper[4792]: E1127 17:13:37.715137 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:37 crc kubenswrapper[4792]: E1127 17:13:37.715378 4792 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:37 crc kubenswrapper[4792]: E1127 17:13:37.715594 4792 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.715625 4792 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 27 17:13:37 crc kubenswrapper[4792]: E1127 17:13:37.715885 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="200ms" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.716401 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.716440 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.910478 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.910544 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:37 crc kubenswrapper[4792]: E1127 17:13:37.916410 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="400ms" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.953684 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.954378 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:37 crc kubenswrapper[4792]: I1127 17:13:37.954801 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.170219 4792 generic.go:334] "Generic (PLEG): container finished" podID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" containerID="deaa42c03a8fa19bc786a9bf42cb3dfe0a9aae6ed870d91e56913b36762d15d8" exitCode=0 Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.170323 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0","Type":"ContainerDied","Data":"deaa42c03a8fa19bc786a9bf42cb3dfe0a9aae6ed870d91e56913b36762d15d8"} Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.171031 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.171411 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.171697 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.172377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d"} Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.172913 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.173154 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.173386 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.174396 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.175614 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.176344 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9" exitCode=0 Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.176372 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2" exitCode=0 
Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.176382 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99" exitCode=0 Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.176399 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c" exitCode=2 Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.176399 4792 scope.go:117] "RemoveContainer" containerID="503469dc6353bafafd698dae02bc2379377785f46400bbf1168981b2caa54d40" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.218223 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t2tmt" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.218749 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.218935 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.219213 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: E1127 17:13:38.318245 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="800ms" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.691316 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.692803 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.693389 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:38 crc kubenswrapper[4792]: I1127 17:13:38.755852 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4k6w8" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" containerName="registry-server" probeResult="failure" output=< Nov 27 17:13:38 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:13:38 crc kubenswrapper[4792]: > Nov 27 17:13:39 crc kubenswrapper[4792]: E1127 17:13:39.054164 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:13:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:13:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:13:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-27T17:13:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:39 crc kubenswrapper[4792]: E1127 17:13:39.054816 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:39 crc kubenswrapper[4792]: E1127 17:13:39.055537 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:39 crc kubenswrapper[4792]: E1127 17:13:39.055971 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:39 crc kubenswrapper[4792]: E1127 17:13:39.056376 4792 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:39 crc kubenswrapper[4792]: E1127 17:13:39.056404 4792 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 27 17:13:39 crc kubenswrapper[4792]: E1127 17:13:39.128481 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="1.6s" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.187932 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.424802 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.425389 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.425847 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.426228 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.556833 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kube-api-access\") pod \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.556933 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kubelet-dir\") pod \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.557014 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-var-lock\") pod \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\" (UID: \"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0\") " Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.557014 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" (UID: "3bf54b70-6a9d-43cf-829a-35d0abe3bcc0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.557153 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-var-lock" (OuterVolumeSpecName: "var-lock") pod "3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" (UID: "3bf54b70-6a9d-43cf-829a-35d0abe3bcc0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.557329 4792 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.557351 4792 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-var-lock\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.564185 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" (UID: "3bf54b70-6a9d-43cf-829a-35d0abe3bcc0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:39 crc kubenswrapper[4792]: I1127 17:13:39.658695 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bf54b70-6a9d-43cf-829a-35d0abe3bcc0-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.195051 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.195572 4792 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33" exitCode=0 Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.197152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3bf54b70-6a9d-43cf-829a-35d0abe3bcc0","Type":"ContainerDied","Data":"fad0b0207be32f756a299791a2130c90c4d573635cdb3dae8f9bb98782a97de0"} Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.197188 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad0b0207be32f756a299791a2130c90c4d573635cdb3dae8f9bb98782a97de0" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.197235 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.209378 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.209858 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.210474 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.521175 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.522380 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.522882 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.523304 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.523604 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.523838 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.670619 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.670718 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.670718 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.670822 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.670973 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.670748 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.671514 4792 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.671546 4792 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.671561 4792 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:40 crc kubenswrapper[4792]: I1127 17:13:40.697069 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 27 17:13:40 crc kubenswrapper[4792]: E1127 17:13:40.729709 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="3.2s" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.206814 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.208537 4792 scope.go:117] "RemoveContainer" containerID="74c6222ff2907561b403303b4a7e5db5783f3206d0810d3c8bcf6a6531471aa9" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.208747 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.209580 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.210212 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.210640 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.211201 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.213413 4792 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.213878 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.214199 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.214386 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.223412 4792 scope.go:117] "RemoveContainer" containerID="ff1e75d1954a80c46d34dc300631152ee2cc056f408a2d7df7a485a0c0cca6b2" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.234292 4792 scope.go:117] "RemoveContainer" 
containerID="f475caa228ed80e15c08c020370de8d4052494d44b62e190eeb0a69f1643fc99" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.251046 4792 scope.go:117] "RemoveContainer" containerID="41fe0474494d009b6b1c38f02688ad08cac9796bc80230d37ef0ecd1e5a4be0c" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.268389 4792 scope.go:117] "RemoveContainer" containerID="34c314e7e6d0f8ea97516ae66eec724c165c9051770b775be143b197040fdc33" Nov 27 17:13:41 crc kubenswrapper[4792]: I1127 17:13:41.288822 4792 scope.go:117] "RemoveContainer" containerID="2fe0ff0166c2238acb8b63a6d2e51a2effc114e23dec0b824ad25dd6a7fa74ea" Nov 27 17:13:43 crc kubenswrapper[4792]: E1127 17:13:43.391004 4792 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187bec698e67fc95 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-27 17:13:37.150581909 +0000 UTC m=+239.493408227,LastTimestamp:2025-11-27 17:13:37.150581909 +0000 UTC m=+239.493408227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 27 17:13:43 crc kubenswrapper[4792]: I1127 17:13:43.638345 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" containerName="oauth-openshift" containerID="cri-o://d796ecb60e246a664a762fa8c47df0e308df6b112f8c1c0546c1d21133221218" gracePeriod=15 Nov 27 17:13:43 crc kubenswrapper[4792]: E1127 17:13:43.930617 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="6.4s" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.243063 4792 generic.go:334] "Generic (PLEG): container finished" podID="8412b381-cbf1-4f9c-8e93-6991812b725d" containerID="d796ecb60e246a664a762fa8c47df0e308df6b112f8c1c0546c1d21133221218" exitCode=0 Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.243154 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" event={"ID":"8412b381-cbf1-4f9c-8e93-6991812b725d","Type":"ContainerDied","Data":"d796ecb60e246a664a762fa8c47df0e308df6b112f8c1c0546c1d21133221218"} Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.700411 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.700965 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.701228 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.701462 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.701701 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.758755 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.759537 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.759901 4792 status_manager.go:851] "Failed to get status for pod" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" pod="openshift-marketplace/redhat-operators-4k6w8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4k6w8\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.760154 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.760523 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc 
kubenswrapper[4792]: I1127 17:13:47.760881 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.796102 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4k6w8" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.796706 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.797130 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.797677 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.797918 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.798223 4792 status_manager.go:851] "Failed to get status for pod" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" pod="openshift-marketplace/redhat-operators-4k6w8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4k6w8\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858264 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-cliconfig\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858502 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-router-certs\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858544 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-login\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858587 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-trusted-ca-bundle\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858608 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-service-ca\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858679 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-idp-0-file-data\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858715 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-dir\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858764 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-serving-cert\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858788 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-ocp-branding-template\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858814 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-error\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858837 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-provider-selection\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858886 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt2dl\" (UniqueName: 
\"kubernetes.io/projected/8412b381-cbf1-4f9c-8e93-6991812b725d-kube-api-access-wt2dl\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858912 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-policies\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858937 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-session\") pod \"8412b381-cbf1-4f9c-8e93-6991812b725d\" (UID: \"8412b381-cbf1-4f9c-8e93-6991812b725d\") " Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.858952 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.859213 4792 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.859625 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.859688 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.860053 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.860329 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.865263 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.865466 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.865963 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.866415 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.866882 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8412b381-cbf1-4f9c-8e93-6991812b725d-kube-api-access-wt2dl" (OuterVolumeSpecName: "kube-api-access-wt2dl") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "kube-api-access-wt2dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.866940 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.867600 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.867876 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.868010 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "8412b381-cbf1-4f9c-8e93-6991812b725d" (UID: "8412b381-cbf1-4f9c-8e93-6991812b725d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.960883 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.960938 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.960963 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961019 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961044 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt2dl\" (UniqueName: \"kubernetes.io/projected/8412b381-cbf1-4f9c-8e93-6991812b725d-kube-api-access-wt2dl\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961069 4792 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961089 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961109 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961135 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961156 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961178 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961199 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:47 crc kubenswrapper[4792]: I1127 17:13:47.961220 4792 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8412b381-cbf1-4f9c-8e93-6991812b725d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.251251 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.251304 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" event={"ID":"8412b381-cbf1-4f9c-8e93-6991812b725d","Type":"ContainerDied","Data":"a5c41ce3754aa09f937a3f1ae86a9e0530519e3fb155695a4d5db1bb688c0f73"} Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.251344 4792 scope.go:117] "RemoveContainer" containerID="d796ecb60e246a664a762fa8c47df0e308df6b112f8c1c0546c1d21133221218" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.252202 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.252516 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.253051 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.253491 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.254268 4792 status_manager.go:851] "Failed to get status for pod" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" pod="openshift-marketplace/redhat-operators-4k6w8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4k6w8\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.269369 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.269964 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.270461 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.270822 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.271113 4792 status_manager.go:851] "Failed to get status for pod" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" pod="openshift-marketplace/redhat-operators-4k6w8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4k6w8\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.689271 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.689507 4792 status_manager.go:851] "Failed to get status for pod" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" pod="openshift-marketplace/redhat-operators-4k6w8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4k6w8\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.689859 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.690251 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:48 crc kubenswrapper[4792]: I1127 17:13:48.690431 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:50 crc kubenswrapper[4792]: E1127 17:13:50.331797 4792 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="7s" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.531021 4792 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.531103 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.685875 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.686771 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.686953 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.687121 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.687271 4792 status_manager.go:851] "Failed to get status for pod" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" pod="openshift-marketplace/redhat-operators-4k6w8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4k6w8\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.687484 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.700299 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.700405 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:13:51 crc kubenswrapper[4792]: E1127 17:13:51.701024 4792 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:51 crc kubenswrapper[4792]: I1127 17:13:51.701721 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.283670 4792 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="22cf55b82f6b4e0d3b48789472864c5df49ea4b38c1aaf36b64260b0cfc89e0b" exitCode=0 Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.283798 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"22cf55b82f6b4e0d3b48789472864c5df49ea4b38c1aaf36b64260b0cfc89e0b"} Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.284068 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a22e2cdf2bb2722ed6132c53c6f59080a35f9dae603791e2a90ac075ac812e87"} Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.284393 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.284410 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:13:52 crc kubenswrapper[4792]: E1127 17:13:52.284954 4792 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.285277 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.285830 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.286243 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.286626 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.286980 4792 status_manager.go:851] "Failed to get status for pod" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" pod="openshift-marketplace/redhat-operators-4k6w8" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4k6w8\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.289415 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.289467 4792 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd" exitCode=1 Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.289502 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd"} Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.290068 4792 scope.go:117] "RemoveContainer" containerID="9a93b2d439bc934546e17b4aab1a486d15a92192e683445df1f068a32c4068dd" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.290235 4792 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.290583 4792 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.290972 4792 status_manager.go:851] "Failed to get status for pod" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" pod="openshift-authentication/oauth-openshift-558db77b4-jz8bh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jz8bh\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.291404 4792 status_manager.go:851] "Failed to get status for pod" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.292276 4792 status_manager.go:851] "Failed to get status for pod" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" pod="openshift-marketplace/certified-operators-t2tmt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t2tmt\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.292775 4792 status_manager.go:851] "Failed to get status for pod" podUID="16cfc9e0-e90a-438f-9128-6d59f065695e" pod="openshift-marketplace/redhat-operators-4k6w8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-4k6w8\": dial tcp 38.102.83.214:6443: connect: connection refused" Nov 27 17:13:52 crc kubenswrapper[4792]: I1127 17:13:52.400046 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:13:53 crc kubenswrapper[4792]: I1127 17:13:53.294721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5c6c67ce37e707d30c6792e43e272fe86fe7c39c031b59a7c9b6a1ccfada10ad"} Nov 27 17:13:53 crc kubenswrapper[4792]: I1127 17:13:53.294986 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d13d22c4e004835f6ba7340dbd26f8ea7587a8b42ad8883ddf1caaae669ecea8"} Nov 27 17:13:53 crc kubenswrapper[4792]: I1127 17:13:53.296923 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 27 17:13:53 crc kubenswrapper[4792]: I1127 17:13:53.296949 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de036c9ad6f2c8ee9c74ad40d0ebaba4b6f816bbbe1b2d089f7dafe106f114db"} Nov 27 17:13:54 crc kubenswrapper[4792]: I1127 17:13:54.305681 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0d1b4929edcf15d9cc06589bbd75a4bdf3d6f077afa268b691d0b6fd413c0c1c"} Nov 27 17:13:54 crc kubenswrapper[4792]: I1127 17:13:54.305742 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4f7251e07220f1a0c0d9c4094dcced415ca0a596bc468d5903a467237f65271f"} Nov 27 17:13:54 crc kubenswrapper[4792]: I1127 17:13:54.305757 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b0bd32a4f0934037370caff3cbb60a6b87e7f930132ed790b024d8e398a0241b"} Nov 27 17:13:54 crc kubenswrapper[4792]: I1127 17:13:54.306069 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:13:54 crc kubenswrapper[4792]: I1127 17:13:54.306095 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:13:56 crc kubenswrapper[4792]: I1127 17:13:56.702246 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:56 crc kubenswrapper[4792]: I1127 17:13:56.702573 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:56 crc kubenswrapper[4792]: I1127 17:13:56.710320 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:58 crc kubenswrapper[4792]: 
I1127 17:13:58.815748 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:13:58 crc kubenswrapper[4792]: I1127 17:13:58.826833 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:13:59 crc kubenswrapper[4792]: I1127 17:13:59.338687 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:13:59 crc kubenswrapper[4792]: I1127 17:13:59.349729 4792 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:13:59 crc kubenswrapper[4792]: I1127 17:13:59.515353 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="49bd0de5-1ff2-425d-823d-7d9628cd8845" Nov 27 17:14:00 crc kubenswrapper[4792]: I1127 17:14:00.342993 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:14:00 crc kubenswrapper[4792]: I1127 17:14:00.342995 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:14:00 crc kubenswrapper[4792]: I1127 17:14:00.343041 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:14:00 crc kubenswrapper[4792]: I1127 17:14:00.346665 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="49bd0de5-1ff2-425d-823d-7d9628cd8845" Nov 27 17:14:00 crc kubenswrapper[4792]: I1127 17:14:00.347628 4792 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://d13d22c4e004835f6ba7340dbd26f8ea7587a8b42ad8883ddf1caaae669ecea8" Nov 27 17:14:00 crc kubenswrapper[4792]: I1127 17:14:00.347695 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:14:01 crc kubenswrapper[4792]: I1127 17:14:01.348317 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:14:01 crc kubenswrapper[4792]: I1127 17:14:01.348353 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:14:01 crc kubenswrapper[4792]: I1127 17:14:01.352011 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="49bd0de5-1ff2-425d-823d-7d9628cd8845" Nov 27 17:14:02 crc kubenswrapper[4792]: I1127 17:14:02.352858 4792 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:14:02 crc kubenswrapper[4792]: I1127 17:14:02.353186 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="416187e8-58ba-45d1-972c-5c2fea1afd90" Nov 27 17:14:02 crc 
kubenswrapper[4792]: I1127 17:14:02.356921 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="49bd0de5-1ff2-425d-823d-7d9628cd8845" Nov 27 17:14:08 crc kubenswrapper[4792]: I1127 17:14:08.739199 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 27 17:14:09 crc kubenswrapper[4792]: I1127 17:14:09.516521 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 27 17:14:09 crc kubenswrapper[4792]: I1127 17:14:09.972607 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 27 17:14:10 crc kubenswrapper[4792]: I1127 17:14:10.196118 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 27 17:14:10 crc kubenswrapper[4792]: I1127 17:14:10.316712 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 27 17:14:10 crc kubenswrapper[4792]: I1127 17:14:10.354109 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 27 17:14:10 crc kubenswrapper[4792]: I1127 17:14:10.649271 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.488996 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.506232 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.529759 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.535671 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.582128 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.702817 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.727481 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.769753 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 27 17:14:11 crc kubenswrapper[4792]: I1127 17:14:11.949261 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.159546 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.210036 
4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.244580 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.265380 4792 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.556005 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.626222 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.757736 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.765189 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.809129 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.860928 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.861957 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.877258 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.931092 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.973099 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 27 17:14:12 crc kubenswrapper[4792]: I1127 17:14:12.983841 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.179601 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.189551 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.273615 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.289340 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.340329 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 27 17:14:13 
crc kubenswrapper[4792]: I1127 17:14:13.395908 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.437723 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.538496 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.985989 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 27 17:14:13 crc kubenswrapper[4792]: I1127 17:14:13.991936 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.065689 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.152487 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.159916 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.233993 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.253312 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.298617 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.377225 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.538399 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.597670 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.608091 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.618794 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.623814 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.787271 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.915508 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.960421 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.965353 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 27 17:14:14 crc kubenswrapper[4792]: I1127 17:14:14.993195 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.051570 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.053711 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.076297 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.264860 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.327056 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.369230 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.440951 4792 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.443497 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.443479165 podStartE2EDuration="39.443479165s" podCreationTimestamp="2025-11-27 17:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:13:59.394478807 +0000 UTC m=+261.737305155" watchObservedRunningTime="2025-11-27 17:14:15.443479165 +0000 UTC m=+277.786305483" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.444995 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-jz8bh"] Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.445044 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.451042 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.459923 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.466614 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.466594926 podStartE2EDuration="16.466594926s" podCreationTimestamp="2025-11-27 17:13:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:14:15.465326401 +0000 UTC m=+277.808152759" watchObservedRunningTime="2025-11-27 17:14:15.466594926 +0000 UTC m=+277.809421254" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.499805 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.561454 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.604815 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.624706 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.647782 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.652145 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.667409 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.690473 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.718099 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.762558 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.786975 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.844842 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.946209 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 27 17:14:15 crc kubenswrapper[4792]: I1127 17:14:15.983267 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.005519 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.171183 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.203514 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.251610 4792 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.321944 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.327792 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.568355 4792 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.697551 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" path="/var/lib/kubelet/pods/8412b381-cbf1-4f9c-8e93-6991812b725d/volumes" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.858306 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 27 17:14:16 crc kubenswrapper[4792]: I1127 17:14:16.914847 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.134676 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.158046 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.357509 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.386689 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.398393 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.431067 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.486000 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.658211 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.686486 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.699446 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.716757 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.720134 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.739690 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.770359 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.772662 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.776979 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.867777 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.960912 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 27 17:14:17 crc kubenswrapper[4792]: I1127 17:14:17.986548 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.074563 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.079325 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.188552 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.207549 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.245929 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.288363 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.367961 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.381776 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.414343 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.520381 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.639966 4792 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.648180 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.722686 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 27 17:14:18 crc kubenswrapper[4792]: 
I1127 17:14:18.830951 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.903023 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.928142 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 27 17:14:18 crc kubenswrapper[4792]: I1127 17:14:18.991559 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.025132 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.111395 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.136661 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.145090 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.152451 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.153551 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.200256 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.250501 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.354229 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.393567 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.446002 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.459892 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.606707 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.628494 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.638568 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.667008 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.668050 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.721467 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.783515 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.826911 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f9b8778db-ftbcv"] Nov 27 17:14:19 crc kubenswrapper[4792]: E1127 17:14:19.827117 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" containerName="installer" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.827129 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" containerName="installer" Nov 27 17:14:19 crc kubenswrapper[4792]: E1127 17:14:19.827142 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" containerName="oauth-openshift" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.827148 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" containerName="oauth-openshift" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.827247 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8412b381-cbf1-4f9c-8e93-6991812b725d" containerName="oauth-openshift" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.827260 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf54b70-6a9d-43cf-829a-35d0abe3bcc0" containerName="installer" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.827610 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.830379 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.830475 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.830546 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.830553 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.830977 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.830982 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.831096 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.831239 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.831292 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.831508 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.831626 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.833663 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.843922 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.846419 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.847536 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f9b8778db-ftbcv"] Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.850426 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860334 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " 
pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860377 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-login\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860396 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf4j2\" (UniqueName: \"kubernetes.io/projected/f41d5a38-16fb-489d-9095-bc0723d3f2f1-kube-api-access-zf4j2\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860436 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-session\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860475 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860503 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860534 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860564 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-audit-policies\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860584 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860617 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860634 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860669 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f41d5a38-16fb-489d-9095-bc0723d3f2f1-audit-dir\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.860687 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-error\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.877248 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.934167 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961284 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf4j2\" (UniqueName: \"kubernetes.io/projected/f41d5a38-16fb-489d-9095-bc0723d3f2f1-kube-api-access-zf4j2\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961322 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-session\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961355 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961382 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961412 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961432 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-audit-policies\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961454 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961486 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961502 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961519 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f41d5a38-16fb-489d-9095-bc0723d3f2f1-audit-dir\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-error\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961557 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961577 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-login\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.961592 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.964436 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f41d5a38-16fb-489d-9095-bc0723d3f2f1-audit-dir\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.964876 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.965203 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-audit-policies\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.965632 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.966037 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.968352 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.968356 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-login\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.968443 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.968926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.969067 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.969089 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-template-error\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.969239 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.975200 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f41d5a38-16fb-489d-9095-bc0723d3f2f1-v4-0-config-system-session\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.983139 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf4j2\" (UniqueName: \"kubernetes.io/projected/f41d5a38-16fb-489d-9095-bc0723d3f2f1-kube-api-access-zf4j2\") pod \"oauth-openshift-6f9b8778db-ftbcv\" (UID: \"f41d5a38-16fb-489d-9095-bc0723d3f2f1\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:19 crc kubenswrapper[4792]: I1127 17:14:19.999254 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.057534 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.137849 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.140095 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.143728 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.167261 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.203474 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.242869 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.249517 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.255046 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.267827 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.326925 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.328454 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.328558 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f9b8778db-ftbcv"] Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.389259 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.407523 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.429503 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.462012 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" event={"ID":"f41d5a38-16fb-489d-9095-bc0723d3f2f1","Type":"ContainerStarted","Data":"4cbbe9854b44f17c4e8d3cab4ccce43f14558915910701635740878973df060d"} Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.579129 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.592795 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.597741 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.720008 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.737866 4792 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.807163 4792 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.807403 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d" gracePeriod=5 Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.829613 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.832998 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.840155 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.862859 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.867203 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 27 17:14:20 crc kubenswrapper[4792]: I1127 17:14:20.931325 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.082229 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.153569 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.349810 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.350969 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.385527 4792 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.458343 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.468498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" event={"ID":"f41d5a38-16fb-489d-9095-bc0723d3f2f1","Type":"ContainerStarted","Data":"49d91604954861b1a66c6cecf2b8a752d4049f1040d207bf7fdd1de4d4eb7796"} Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.468854 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.479441 4792 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.486766 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.496335 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f9b8778db-ftbcv" podStartSLOduration=63.496286128 podStartE2EDuration="1m3.496286128s" podCreationTimestamp="2025-11-27 17:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:14:21.492441784 +0000 UTC m=+283.835268102" watchObservedRunningTime="2025-11-27 17:14:21.496286128 +0000 UTC m=+283.839112446" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.529303 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.617522 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.735799 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.787424 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.789521 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.794318 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.828449 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.850050 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.889115 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.899413 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.954328 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 27 17:14:21 crc kubenswrapper[4792]: I1127 17:14:21.972838 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.009968 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.090831 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" 
Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.106150 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.150908 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.150908 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.253371 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.342707 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.406925 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.454678 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.455452 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.499166 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.618322 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.654908 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.686679 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.732700 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 27 17:14:22 crc kubenswrapper[4792]: I1127 17:14:22.913462 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.052614 4792 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.139569 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.141177 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.222058 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.303275 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.333145 4792 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.437285 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.600029 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.622540 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.788717 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.819424 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.855588 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.897992 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 27 17:14:23 crc kubenswrapper[4792]: I1127 17:14:23.952275 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.191394 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.259580 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.290579 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.513452 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.726111 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.759485 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.839295 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.854839 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 27 17:14:24 crc kubenswrapper[4792]: I1127 17:14:24.992562 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.087273 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 27 
17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.176366 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.503048 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.923638 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.923804 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.932170 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.932236 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.932250 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.932423 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.932740 4792 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:25 crc kubenswrapper[4792]: I1127 17:14:25.932761 4792 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.033737 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.033972 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.033841 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.034049 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.034196 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.034417 4792 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.034459 4792 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.044888 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.071767 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.073614 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.135859 4792 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.191526 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.498945 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.499016 4792 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d" exitCode=137 Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.499063 4792 scope.go:117] "RemoveContainer" containerID="3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.499124 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.527984 4792 scope.go:117] "RemoveContainer" containerID="3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d" Nov 27 17:14:26 crc kubenswrapper[4792]: E1127 17:14:26.528884 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d\": container with ID starting with 3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d not found: ID does not exist" containerID="3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.528931 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d"} err="failed to get container status \"3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d\": rpc error: code = NotFound desc = could not find container \"3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d\": container with ID starting with 3e2aa632f9ef5fea629b1c21da4efd0da68b9964244922390261a06e590bf80d not found: ID does not exist" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.706594 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.708236 4792 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.741250 4792 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.741380 4792 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="460db256-81c8-414c-8ffc-42af9a5532db" Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.753475 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 27 17:14:26 crc kubenswrapper[4792]: I1127 17:14:26.753545 4792 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="460db256-81c8-414c-8ffc-42af9a5532db" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.825091 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw"] Nov 27 17:14:39 crc kubenswrapper[4792]: E1127 17:14:39.825901 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.825917 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.826033 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.826488 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.828868 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.829093 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.829328 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.829681 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.830167 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 27 17:14:39 crc kubenswrapper[4792]: I1127 17:14:39.837385 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw"] Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.026556 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv49z\" (UniqueName: \"kubernetes.io/projected/f55bbedd-795c-4fb8-906f-46193cb44bad-kube-api-access-qv49z\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.027149 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/f55bbedd-795c-4fb8-906f-46193cb44bad-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.027474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f55bbedd-795c-4fb8-906f-46193cb44bad-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.128631 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv49z\" (UniqueName: \"kubernetes.io/projected/f55bbedd-795c-4fb8-906f-46193cb44bad-kube-api-access-qv49z\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.128816 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f55bbedd-795c-4fb8-906f-46193cb44bad-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.128943 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f55bbedd-795c-4fb8-906f-46193cb44bad-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.130987 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/f55bbedd-795c-4fb8-906f-46193cb44bad-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.136422 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/f55bbedd-795c-4fb8-906f-46193cb44bad-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.150261 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv49z\" (UniqueName: \"kubernetes.io/projected/f55bbedd-795c-4fb8-906f-46193cb44bad-kube-api-access-qv49z\") pod \"cluster-monitoring-operator-6d5b84845-rxsrw\" (UID: \"f55bbedd-795c-4fb8-906f-46193cb44bad\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.446890 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" Nov 27 17:14:40 crc kubenswrapper[4792]: I1127 17:14:40.624499 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw"] Nov 27 17:14:41 crc kubenswrapper[4792]: I1127 17:14:41.588797 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" event={"ID":"f55bbedd-795c-4fb8-906f-46193cb44bad","Type":"ContainerStarted","Data":"2d6cba906aa51533e0e6dd813190acbda8efd572e7227d19ae6e2309320ea14f"} Nov 27 17:14:43 crc kubenswrapper[4792]: I1127 17:14:43.602080 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" event={"ID":"f55bbedd-795c-4fb8-906f-46193cb44bad","Type":"ContainerStarted","Data":"3ef603601da299a39f6b53e78f4c36f0fdd2854d972c448b64960fa3701f418a"} Nov 27 17:14:43 crc kubenswrapper[4792]: I1127 17:14:43.606916 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm"] Nov 27 17:14:43 crc kubenswrapper[4792]: I1127 17:14:43.607757 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:14:43 crc kubenswrapper[4792]: I1127 17:14:43.610224 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 27 17:14:43 crc kubenswrapper[4792]: I1127 17:14:43.616334 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm"] Nov 27 17:14:43 crc kubenswrapper[4792]: I1127 17:14:43.625775 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-rxsrw" podStartSLOduration=2.238883878 podStartE2EDuration="4.625753466s" podCreationTimestamp="2025-11-27 17:14:39 +0000 UTC" firstStartedPulling="2025-11-27 17:14:40.631188648 +0000 UTC m=+302.974014966" lastFinishedPulling="2025-11-27 17:14:43.018058226 +0000 UTC m=+305.360884554" observedRunningTime="2025-11-27 17:14:43.624027779 +0000 UTC m=+305.966854097" watchObservedRunningTime="2025-11-27 17:14:43.625753466 +0000 UTC m=+305.968579774" Nov 27 17:14:43 crc kubenswrapper[4792]: I1127 17:14:43.670958 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:14:43 crc kubenswrapper[4792]: I1127 17:14:43.772285 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:14:43 crc kubenswrapper[4792]: E1127 17:14:43.772545 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret 
"prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:43 crc kubenswrapper[4792]: E1127 17:14:43.773335 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates podName:7a6db3f4-940f-4d6f-892d-d6f5003bd881 nodeName:}" failed. No retries permitted until 2025-11-27 17:14:44.273298264 +0000 UTC m=+306.616124622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-xmhbm" (UID: "7a6db3f4-940f-4d6f-892d-d6f5003bd881") : secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:44 crc kubenswrapper[4792]: I1127 17:14:44.277618 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:14:44 crc kubenswrapper[4792]: E1127 17:14:44.277824 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:44 crc kubenswrapper[4792]: E1127 17:14:44.277916 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates podName:7a6db3f4-940f-4d6f-892d-d6f5003bd881 nodeName:}" failed. No retries permitted until 2025-11-27 17:14:45.277892628 +0000 UTC m=+307.620718956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-xmhbm" (UID: "7a6db3f4-940f-4d6f-892d-d6f5003bd881") : secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:45 crc kubenswrapper[4792]: I1127 17:14:45.291287 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:14:45 crc kubenswrapper[4792]: E1127 17:14:45.291543 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:45 crc kubenswrapper[4792]: E1127 17:14:45.292536 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates podName:7a6db3f4-940f-4d6f-892d-d6f5003bd881 nodeName:}" failed. No retries permitted until 2025-11-27 17:14:47.292519707 +0000 UTC m=+309.635346025 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-xmhbm" (UID: "7a6db3f4-940f-4d6f-892d-d6f5003bd881") : secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:47 crc kubenswrapper[4792]: I1127 17:14:47.316126 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:14:47 crc kubenswrapper[4792]: E1127 17:14:47.316369 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:47 crc kubenswrapper[4792]: E1127 17:14:47.316463 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates podName:7a6db3f4-940f-4d6f-892d-d6f5003bd881 nodeName:}" failed. No retries permitted until 2025-11-27 17:14:51.316440927 +0000 UTC m=+313.659267265 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-xmhbm" (UID: "7a6db3f4-940f-4d6f-892d-d6f5003bd881") : secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:48 crc kubenswrapper[4792]: I1127 17:14:48.565867 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8gb65"] Nov 27 17:14:48 crc kubenswrapper[4792]: I1127 17:14:48.566100 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" podUID="00795f3d-b1b4-494b-8898-380798319532" containerName="controller-manager" containerID="cri-o://4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af" gracePeriod=30 Nov 27 17:14:48 crc kubenswrapper[4792]: I1127 17:14:48.665963 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"] Nov 27 17:14:48 crc kubenswrapper[4792]: I1127 17:14:48.666434 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" podUID="34727a2e-1e3a-4371-9052-7df4c6693f44" containerName="route-controller-manager" containerID="cri-o://bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078" gracePeriod=30 Nov 27 17:14:48 crc kubenswrapper[4792]: I1127 17:14:48.882699 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:14:48 crc kubenswrapper[4792]: I1127 17:14:48.992515 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.043062 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00795f3d-b1b4-494b-8898-380798319532-serving-cert\") pod \"00795f3d-b1b4-494b-8898-380798319532\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.043172 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-client-ca\") pod \"00795f3d-b1b4-494b-8898-380798319532\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.043214 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-config\") pod \"00795f3d-b1b4-494b-8898-380798319532\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.043239 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-proxy-ca-bundles\") pod \"00795f3d-b1b4-494b-8898-380798319532\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.043262 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8s7l\" (UniqueName: \"kubernetes.io/projected/00795f3d-b1b4-494b-8898-380798319532-kube-api-access-p8s7l\") pod \"00795f3d-b1b4-494b-8898-380798319532\" (UID: \"00795f3d-b1b4-494b-8898-380798319532\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.044366 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "00795f3d-b1b4-494b-8898-380798319532" (UID: "00795f3d-b1b4-494b-8898-380798319532"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.044392 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-client-ca" (OuterVolumeSpecName: "client-ca") pod "00795f3d-b1b4-494b-8898-380798319532" (UID: "00795f3d-b1b4-494b-8898-380798319532"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.044750 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-config" (OuterVolumeSpecName: "config") pod "00795f3d-b1b4-494b-8898-380798319532" (UID: "00795f3d-b1b4-494b-8898-380798319532"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.048633 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00795f3d-b1b4-494b-8898-380798319532-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "00795f3d-b1b4-494b-8898-380798319532" (UID: "00795f3d-b1b4-494b-8898-380798319532"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.048756 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00795f3d-b1b4-494b-8898-380798319532-kube-api-access-p8s7l" (OuterVolumeSpecName: "kube-api-access-p8s7l") pod "00795f3d-b1b4-494b-8898-380798319532" (UID: "00795f3d-b1b4-494b-8898-380798319532"). InnerVolumeSpecName "kube-api-access-p8s7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146048 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-config\") pod \"34727a2e-1e3a-4371-9052-7df4c6693f44\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146133 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34727a2e-1e3a-4371-9052-7df4c6693f44-serving-cert\") pod \"34727a2e-1e3a-4371-9052-7df4c6693f44\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146189 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5r4q\" (UniqueName: \"kubernetes.io/projected/34727a2e-1e3a-4371-9052-7df4c6693f44-kube-api-access-d5r4q\") pod \"34727a2e-1e3a-4371-9052-7df4c6693f44\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146227 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-client-ca\") pod \"34727a2e-1e3a-4371-9052-7df4c6693f44\" (UID: \"34727a2e-1e3a-4371-9052-7df4c6693f44\") " Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146400 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00795f3d-b1b4-494b-8898-380798319532-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146449 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-client-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146463 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146474 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00795f3d-b1b4-494b-8898-380798319532-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146489 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8s7l\" (UniqueName: \"kubernetes.io/projected/00795f3d-b1b4-494b-8898-380798319532-kube-api-access-p8s7l\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.146981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-client-ca" (OuterVolumeSpecName: "client-ca") pod "34727a2e-1e3a-4371-9052-7df4c6693f44" (UID: 
"34727a2e-1e3a-4371-9052-7df4c6693f44"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.147006 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-config" (OuterVolumeSpecName: "config") pod "34727a2e-1e3a-4371-9052-7df4c6693f44" (UID: "34727a2e-1e3a-4371-9052-7df4c6693f44"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.148877 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34727a2e-1e3a-4371-9052-7df4c6693f44-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "34727a2e-1e3a-4371-9052-7df4c6693f44" (UID: "34727a2e-1e3a-4371-9052-7df4c6693f44"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.149706 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34727a2e-1e3a-4371-9052-7df4c6693f44-kube-api-access-d5r4q" (OuterVolumeSpecName: "kube-api-access-d5r4q") pod "34727a2e-1e3a-4371-9052-7df4c6693f44" (UID: "34727a2e-1e3a-4371-9052-7df4c6693f44"). InnerVolumeSpecName "kube-api-access-d5r4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.247379 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.247411 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34727a2e-1e3a-4371-9052-7df4c6693f44-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.247424 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5r4q\" (UniqueName: \"kubernetes.io/projected/34727a2e-1e3a-4371-9052-7df4c6693f44-kube-api-access-d5r4q\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.247434 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34727a2e-1e3a-4371-9052-7df4c6693f44-client-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.635843 4792 generic.go:334] "Generic (PLEG): container finished" podID="34727a2e-1e3a-4371-9052-7df4c6693f44" containerID="bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078" exitCode=0 Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.635895 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.635920 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" event={"ID":"34727a2e-1e3a-4371-9052-7df4c6693f44","Type":"ContainerDied","Data":"bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078"} Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.635951 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr" event={"ID":"34727a2e-1e3a-4371-9052-7df4c6693f44","Type":"ContainerDied","Data":"6afbf23f932f8a0366155e396aed428496389639bf6ffef251369fa7c2a4aa6a"} Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.635972 4792 scope.go:117] "RemoveContainer" containerID="bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.638194 4792 generic.go:334] "Generic (PLEG): container finished" podID="00795f3d-b1b4-494b-8898-380798319532" containerID="4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af" exitCode=0 Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.638233 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" event={"ID":"00795f3d-b1b4-494b-8898-380798319532","Type":"ContainerDied","Data":"4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af"} Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.638252 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" event={"ID":"00795f3d-b1b4-494b-8898-380798319532","Type":"ContainerDied","Data":"744d899056391a7de511e27cbc7a86f2993305fba44b023243bb04ff3b293f0d"} Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.638460 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8gb65" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.666140 4792 scope.go:117] "RemoveContainer" containerID="bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078" Nov 27 17:14:49 crc kubenswrapper[4792]: E1127 17:14:49.666592 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078\": container with ID starting with bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078 not found: ID does not exist" containerID="bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.666750 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078"} err="failed to get container status \"bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078\": rpc error: code = NotFound desc = could not find container \"bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078\": container with ID starting with bc2d6227c3e8c46115f2e459dae6891bd49e9f754e41012903561e707ce84078 not found: ID does not exist" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.666827 4792 scope.go:117] "RemoveContainer" containerID="4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.667081 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8gb65"] Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.673689 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8gb65"] Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.683071 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"] Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.686594 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-klkjr"] Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.695943 4792 scope.go:117] "RemoveContainer" containerID="4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af" Nov 27 17:14:49 crc kubenswrapper[4792]: E1127 17:14:49.696606 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af\": container with ID starting with 4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af not found: ID does not exist" containerID="4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af" Nov 27 17:14:49 crc kubenswrapper[4792]: I1127 17:14:49.696709 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af"} err="failed to get container status \"4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af\": rpc error: code = NotFound desc = could not find container \"4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af\": container with ID starting with 4540391e49b8b5e167438da387815eed9ffe635be654e7f3dd4d64a58eef86af not found: ID does not exist" Nov 27 17:14:50 crc 
kubenswrapper[4792]: I1127 17:14:50.134741 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc"] Nov 27 17:14:50 crc kubenswrapper[4792]: E1127 17:14:50.134997 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34727a2e-1e3a-4371-9052-7df4c6693f44" containerName="route-controller-manager" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.135013 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="34727a2e-1e3a-4371-9052-7df4c6693f44" containerName="route-controller-manager" Nov 27 17:14:50 crc kubenswrapper[4792]: E1127 17:14:50.135031 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00795f3d-b1b4-494b-8898-380798319532" containerName="controller-manager" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.135039 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="00795f3d-b1b4-494b-8898-380798319532" containerName="controller-manager" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.135161 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="00795f3d-b1b4-494b-8898-380798319532" containerName="controller-manager" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.135176 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="34727a2e-1e3a-4371-9052-7df4c6693f44" containerName="route-controller-manager" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.135547 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.136852 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.137162 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.137529 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.137810 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.137887 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.139131 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.139539 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk"] Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.142827 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.144567 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc"] Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.144957 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.145559 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.148222 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.148487 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.148656 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.149184 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.151701 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.153191 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk"] Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.263428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72gs4\" (UniqueName: \"kubernetes.io/projected/8721ba87-b116-4a78-a6f3-1b7bd4011272-kube-api-access-72gs4\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.263508 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-proxy-ca-bundles\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.263532 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-client-ca\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.263559 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptntt\" (UniqueName: \"kubernetes.io/projected/454b5a49-614e-4dfc-bd31-0400a5776bb3-kube-api-access-ptntt\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " 
pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.263998 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-client-ca\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.264076 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/454b5a49-614e-4dfc-bd31-0400a5776bb3-serving-cert\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.264127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8721ba87-b116-4a78-a6f3-1b7bd4011272-serving-cert\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.264155 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-config\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.264178 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-config\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.364943 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/454b5a49-614e-4dfc-bd31-0400a5776bb3-serving-cert\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.365014 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8721ba87-b116-4a78-a6f3-1b7bd4011272-serving-cert\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.365035 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-config\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc 
kubenswrapper[4792]: I1127 17:14:50.365055 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-config\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.365094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72gs4\" (UniqueName: \"kubernetes.io/projected/8721ba87-b116-4a78-a6f3-1b7bd4011272-kube-api-access-72gs4\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.366433 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-config\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.366519 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-proxy-ca-bundles\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.366520 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-config\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.366548 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-client-ca\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.366610 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptntt\" (UniqueName: \"kubernetes.io/projected/454b5a49-614e-4dfc-bd31-0400a5776bb3-kube-api-access-ptntt\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.366637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-client-ca\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.367187 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-client-ca\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.367283 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-client-ca\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.367365 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-proxy-ca-bundles\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.368619 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8721ba87-b116-4a78-a6f3-1b7bd4011272-serving-cert\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.368717 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/454b5a49-614e-4dfc-bd31-0400a5776bb3-serving-cert\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.385502 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptntt\" (UniqueName: \"kubernetes.io/projected/454b5a49-614e-4dfc-bd31-0400a5776bb3-kube-api-access-ptntt\") pod \"route-controller-manager-77d7f7ff5b-2szwc\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.387207 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72gs4\" (UniqueName: \"kubernetes.io/projected/8721ba87-b116-4a78-a6f3-1b7bd4011272-kube-api-access-72gs4\") pod \"controller-manager-7dfc5f568d-jjmnk\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.458619 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.471937 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.637621 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc"] Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.694493 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00795f3d-b1b4-494b-8898-380798319532" path="/var/lib/kubelet/pods/00795f3d-b1b4-494b-8898-380798319532/volumes" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.695362 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34727a2e-1e3a-4371-9052-7df4c6693f44" path="/var/lib/kubelet/pods/34727a2e-1e3a-4371-9052-7df4c6693f44/volumes" Nov 27 17:14:50 crc kubenswrapper[4792]: I1127 17:14:50.712064 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk"] Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.377739 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:14:51 crc kubenswrapper[4792]: E1127 17:14:51.377912 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:51 crc kubenswrapper[4792]: E1127 17:14:51.378220 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates podName:7a6db3f4-940f-4d6f-892d-d6f5003bd881 nodeName:}" failed. No retries permitted until 2025-11-27 17:14:59.378190608 +0000 UTC m=+321.721016926 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-xmhbm" (UID: "7a6db3f4-940f-4d6f-892d-d6f5003bd881") : secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.654464 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" event={"ID":"8721ba87-b116-4a78-a6f3-1b7bd4011272","Type":"ContainerStarted","Data":"192feab5f9982fd55d90c8a3d788c798851c5934c620a3d1f63de883e980840d"} Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.654519 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" event={"ID":"8721ba87-b116-4a78-a6f3-1b7bd4011272","Type":"ContainerStarted","Data":"ec3eac6b25c13d1e9fec915c0b31482ef038a07b5cae470677f077f4ca5ed03a"} Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.654856 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.656568 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" event={"ID":"454b5a49-614e-4dfc-bd31-0400a5776bb3","Type":"ContainerStarted","Data":"a627825a60808b984a1ccd83f137d2140aec1bff4bbccf80c2560f91b72073f7"} Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.656610 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" event={"ID":"454b5a49-614e-4dfc-bd31-0400a5776bb3","Type":"ContainerStarted","Data":"3bd702cf0a680b903d36c766b8a63e20a144634c8c7bb80e23391466b4afbead"} Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.656788 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.661393 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.661924 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.673585 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" podStartSLOduration=3.673564301 podStartE2EDuration="3.673564301s" podCreationTimestamp="2025-11-27 17:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:14:51.671107294 +0000 UTC m=+314.013933612" watchObservedRunningTime="2025-11-27 17:14:51.673564301 +0000 UTC m=+314.016390639" Nov 27 17:14:51 crc kubenswrapper[4792]: I1127 17:14:51.703745 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" podStartSLOduration=3.703731225 podStartE2EDuration="3.703731225s" podCreationTimestamp="2025-11-27 17:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:14:51.701697949 +0000 UTC m=+314.044524287" watchObservedRunningTime="2025-11-27 17:14:51.703731225 +0000 UTC m=+314.046557543" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.457732 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s7cwd"] Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.458731 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.468984 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s7cwd"] Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.603378 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-registry-tls\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.603435 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d1ca7a9-c94c-4715-bffc-4d7504500c52-registry-certificates\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.603514 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca7a9-c94c-4715-bffc-4d7504500c52-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.603559 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.603582 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-bound-sa-token\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.603615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d1ca7a9-c94c-4715-bffc-4d7504500c52-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.603656 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d1ca7a9-c94c-4715-bffc-4d7504500c52-trusted-ca\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.603677 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpdws\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-kube-api-access-cpdws\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.640884 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.704805 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d1ca7a9-c94c-4715-bffc-4d7504500c52-trusted-ca\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.704876 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpdws\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-kube-api-access-cpdws\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.704941 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-registry-tls\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.704962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d1ca7a9-c94c-4715-bffc-4d7504500c52-registry-certificates\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.704999 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca7a9-c94c-4715-bffc-4d7504500c52-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.705044 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-bound-sa-token\") pod 
\"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.705089 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d1ca7a9-c94c-4715-bffc-4d7504500c52-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.706192 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d1ca7a9-c94c-4715-bffc-4d7504500c52-trusted-ca\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.706305 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d1ca7a9-c94c-4715-bffc-4d7504500c52-registry-certificates\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.706353 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d1ca7a9-c94c-4715-bffc-4d7504500c52-ca-trust-extracted\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.711437 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d1ca7a9-c94c-4715-bffc-4d7504500c52-installation-pull-secrets\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.711446 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-registry-tls\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.724594 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-bound-sa-token\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.726083 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpdws\" (UniqueName: \"kubernetes.io/projected/8d1ca7a9-c94c-4715-bffc-4d7504500c52-kube-api-access-cpdws\") pod \"image-registry-66df7c8f76-s7cwd\" (UID: \"8d1ca7a9-c94c-4715-bffc-4d7504500c52\") " pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:53 crc kubenswrapper[4792]: I1127 17:14:53.774209 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:54 crc kubenswrapper[4792]: I1127 17:14:54.026612 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-s7cwd"] Nov 27 17:14:54 crc kubenswrapper[4792]: W1127 17:14:54.032230 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d1ca7a9_c94c_4715_bffc_4d7504500c52.slice/crio-77f24b095be5811353885ab1654bdef1d9ebb21e08e7906f636309080c12b597 WatchSource:0}: Error finding container 77f24b095be5811353885ab1654bdef1d9ebb21e08e7906f636309080c12b597: Status 404 returned error can't find the container with id 77f24b095be5811353885ab1654bdef1d9ebb21e08e7906f636309080c12b597 Nov 27 17:14:54 crc kubenswrapper[4792]: I1127 17:14:54.298682 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 27 17:14:54 crc kubenswrapper[4792]: I1127 17:14:54.674479 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" event={"ID":"8d1ca7a9-c94c-4715-bffc-4d7504500c52","Type":"ContainerStarted","Data":"182128e47eae6adf39625fbbd4b5d5d6c514d577a270c9a004e5ecc92e859298"} Nov 27 17:14:54 crc kubenswrapper[4792]: I1127 17:14:54.674901 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" event={"ID":"8d1ca7a9-c94c-4715-bffc-4d7504500c52","Type":"ContainerStarted","Data":"77f24b095be5811353885ab1654bdef1d9ebb21e08e7906f636309080c12b597"} Nov 27 17:14:54 crc kubenswrapper[4792]: I1127 17:14:54.674922 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:14:54 crc kubenswrapper[4792]: I1127 17:14:54.694244 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" podStartSLOduration=1.694228922 podStartE2EDuration="1.694228922s" podCreationTimestamp="2025-11-27 17:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:14:54.693714448 +0000 UTC m=+317.036540766" watchObservedRunningTime="2025-11-27 17:14:54.694228922 +0000 UTC m=+317.037055240" Nov 27 17:14:55 crc kubenswrapper[4792]: I1127 17:14:55.791521 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk"] Nov 27 17:14:55 crc kubenswrapper[4792]: I1127 17:14:55.791738 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" podUID="8721ba87-b116-4a78-a6f3-1b7bd4011272" containerName="controller-manager" containerID="cri-o://192feab5f9982fd55d90c8a3d788c798851c5934c620a3d1f63de883e980840d" gracePeriod=30 Nov 27 17:14:55 crc kubenswrapper[4792]: I1127 17:14:55.795895 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc"] Nov 27 17:14:55 crc kubenswrapper[4792]: I1127 17:14:55.796178 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" podUID="454b5a49-614e-4dfc-bd31-0400a5776bb3" containerName="route-controller-manager" 
containerID="cri-o://a627825a60808b984a1ccd83f137d2140aec1bff4bbccf80c2560f91b72073f7" gracePeriod=30 Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.690697 4792 generic.go:334] "Generic (PLEG): container finished" podID="8721ba87-b116-4a78-a6f3-1b7bd4011272" containerID="192feab5f9982fd55d90c8a3d788c798851c5934c620a3d1f63de883e980840d" exitCode=0 Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.692906 4792 generic.go:334] "Generic (PLEG): container finished" podID="454b5a49-614e-4dfc-bd31-0400a5776bb3" containerID="a627825a60808b984a1ccd83f137d2140aec1bff4bbccf80c2560f91b72073f7" exitCode=0 Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.704672 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" event={"ID":"8721ba87-b116-4a78-a6f3-1b7bd4011272","Type":"ContainerDied","Data":"192feab5f9982fd55d90c8a3d788c798851c5934c620a3d1f63de883e980840d"} Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.704728 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" event={"ID":"454b5a49-614e-4dfc-bd31-0400a5776bb3","Type":"ContainerDied","Data":"a627825a60808b984a1ccd83f137d2140aec1bff4bbccf80c2560f91b72073f7"} Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.802887 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.808174 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.959936 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72gs4\" (UniqueName: \"kubernetes.io/projected/8721ba87-b116-4a78-a6f3-1b7bd4011272-kube-api-access-72gs4\") pod \"8721ba87-b116-4a78-a6f3-1b7bd4011272\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.960030 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8721ba87-b116-4a78-a6f3-1b7bd4011272-serving-cert\") pod \"8721ba87-b116-4a78-a6f3-1b7bd4011272\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.960109 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/454b5a49-614e-4dfc-bd31-0400a5776bb3-serving-cert\") pod \"454b5a49-614e-4dfc-bd31-0400a5776bb3\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.960184 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-proxy-ca-bundles\") pod \"8721ba87-b116-4a78-a6f3-1b7bd4011272\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") " Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.960234 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-client-ca\") pod \"454b5a49-614e-4dfc-bd31-0400a5776bb3\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") " Nov 27 17:14:56 crc kubenswrapper[4792]: 
I1127 17:14:56.960283 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-client-ca\") pod \"8721ba87-b116-4a78-a6f3-1b7bd4011272\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") "
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.961677 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-client-ca" (OuterVolumeSpecName: "client-ca") pod "8721ba87-b116-4a78-a6f3-1b7bd4011272" (UID: "8721ba87-b116-4a78-a6f3-1b7bd4011272"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.961870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8721ba87-b116-4a78-a6f3-1b7bd4011272" (UID: "8721ba87-b116-4a78-a6f3-1b7bd4011272"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.962023 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-client-ca" (OuterVolumeSpecName: "client-ca") pod "454b5a49-614e-4dfc-bd31-0400a5776bb3" (UID: "454b5a49-614e-4dfc-bd31-0400a5776bb3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.962158 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptntt\" (UniqueName: \"kubernetes.io/projected/454b5a49-614e-4dfc-bd31-0400a5776bb3-kube-api-access-ptntt\") pod \"454b5a49-614e-4dfc-bd31-0400a5776bb3\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") "
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.962517 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-config\") pod \"454b5a49-614e-4dfc-bd31-0400a5776bb3\" (UID: \"454b5a49-614e-4dfc-bd31-0400a5776bb3\") "
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.962597 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-config\") pod \"8721ba87-b116-4a78-a6f3-1b7bd4011272\" (UID: \"8721ba87-b116-4a78-a6f3-1b7bd4011272\") "
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.963116 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-config" (OuterVolumeSpecName: "config") pod "454b5a49-614e-4dfc-bd31-0400a5776bb3" (UID: "454b5a49-614e-4dfc-bd31-0400a5776bb3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.963204 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-config" (OuterVolumeSpecName: "config") pod "8721ba87-b116-4a78-a6f3-1b7bd4011272" (UID: "8721ba87-b116-4a78-a6f3-1b7bd4011272"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.963397 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.963417 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-client-ca\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.963426 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-client-ca\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.963436 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b5a49-614e-4dfc-bd31-0400a5776bb3-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.963445 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8721ba87-b116-4a78-a6f3-1b7bd4011272-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.965886 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8721ba87-b116-4a78-a6f3-1b7bd4011272-kube-api-access-72gs4" (OuterVolumeSpecName: "kube-api-access-72gs4") pod "8721ba87-b116-4a78-a6f3-1b7bd4011272" (UID: "8721ba87-b116-4a78-a6f3-1b7bd4011272"). InnerVolumeSpecName "kube-api-access-72gs4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.965892 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/454b5a49-614e-4dfc-bd31-0400a5776bb3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "454b5a49-614e-4dfc-bd31-0400a5776bb3" (UID: "454b5a49-614e-4dfc-bd31-0400a5776bb3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.966789 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454b5a49-614e-4dfc-bd31-0400a5776bb3-kube-api-access-ptntt" (OuterVolumeSpecName: "kube-api-access-ptntt") pod "454b5a49-614e-4dfc-bd31-0400a5776bb3" (UID: "454b5a49-614e-4dfc-bd31-0400a5776bb3"). InnerVolumeSpecName "kube-api-access-ptntt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:14:56 crc kubenswrapper[4792]: I1127 17:14:56.967792 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8721ba87-b116-4a78-a6f3-1b7bd4011272-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8721ba87-b116-4a78-a6f3-1b7bd4011272" (UID: "8721ba87-b116-4a78-a6f3-1b7bd4011272"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.064692 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72gs4\" (UniqueName: \"kubernetes.io/projected/8721ba87-b116-4a78-a6f3-1b7bd4011272-kube-api-access-72gs4\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.064736 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8721ba87-b116-4a78-a6f3-1b7bd4011272-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.064749 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/454b5a49-614e-4dfc-bd31-0400a5776bb3-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.064760 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptntt\" (UniqueName: \"kubernetes.io/projected/454b5a49-614e-4dfc-bd31-0400a5776bb3-kube-api-access-ptntt\") on node \"crc\" DevicePath \"\""
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.142914 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"]
Nov 27 17:14:57 crc kubenswrapper[4792]: E1127 17:14:57.143196 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8721ba87-b116-4a78-a6f3-1b7bd4011272" containerName="controller-manager"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.143213 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8721ba87-b116-4a78-a6f3-1b7bd4011272" containerName="controller-manager"
Nov 27 17:14:57 crc kubenswrapper[4792]: E1127 17:14:57.143228 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454b5a49-614e-4dfc-bd31-0400a5776bb3" containerName="route-controller-manager"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.143237 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="454b5a49-614e-4dfc-bd31-0400a5776bb3" containerName="route-controller-manager"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.143352 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8721ba87-b116-4a78-a6f3-1b7bd4011272" containerName="controller-manager"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.143372 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="454b5a49-614e-4dfc-bd31-0400a5776bb3" containerName="route-controller-manager"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.145444 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.160468 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"]
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.161192 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.164158 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"]
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.167081 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"]
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268017 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-proxy-ca-bundles\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268101 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-config\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268175 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/924c27d9-6c0b-45ff-95a3-4b1944b74742-serving-cert\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268228 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e02165-5596-481e-9a2d-b3d01896d05d-serving-cert\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268282 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwmq\" (UniqueName: \"kubernetes.io/projected/d7e02165-5596-481e-9a2d-b3d01896d05d-kube-api-access-8wwmq\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-client-ca\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268376 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc2zz\" (UniqueName: \"kubernetes.io/projected/924c27d9-6c0b-45ff-95a3-4b1944b74742-kube-api-access-mc2zz\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-client-ca\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.268474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-config\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.369753 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e02165-5596-481e-9a2d-b3d01896d05d-serving-cert\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.369833 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wwmq\" (UniqueName: \"kubernetes.io/projected/d7e02165-5596-481e-9a2d-b3d01896d05d-kube-api-access-8wwmq\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.369888 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-client-ca\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.369932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc2zz\" (UniqueName: \"kubernetes.io/projected/924c27d9-6c0b-45ff-95a3-4b1944b74742-kube-api-access-mc2zz\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.369981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-client-ca\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.370028 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-config\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.370099 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-proxy-ca-bundles\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.370138 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-config\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.370175 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/924c27d9-6c0b-45ff-95a3-4b1944b74742-serving-cert\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.372059 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-client-ca\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.372123 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-proxy-ca-bundles\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.373361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-client-ca\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.373542 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-config\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.374177 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-config\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.378232 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e02165-5596-481e-9a2d-b3d01896d05d-serving-cert\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.378330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/924c27d9-6c0b-45ff-95a3-4b1944b74742-serving-cert\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.394012 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wwmq\" (UniqueName: \"kubernetes.io/projected/d7e02165-5596-481e-9a2d-b3d01896d05d-kube-api-access-8wwmq\") pod \"route-controller-manager-6ddb95cdcf-z82rg\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.395121 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc2zz\" (UniqueName: \"kubernetes.io/projected/924c27d9-6c0b-45ff-95a3-4b1944b74742-kube-api-access-mc2zz\") pod \"controller-manager-5cbb956dd9-kspsd\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.509818 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.517257 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.712309 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk" event={"ID":"8721ba87-b116-4a78-a6f3-1b7bd4011272","Type":"ContainerDied","Data":"ec3eac6b25c13d1e9fec915c0b31482ef038a07b5cae470677f077f4ca5ed03a"}
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.712783 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.713144 4792 scope.go:117] "RemoveContainer" containerID="192feab5f9982fd55d90c8a3d788c798851c5934c620a3d1f63de883e980840d"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.716899 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc"
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.719847 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc" event={"ID":"454b5a49-614e-4dfc-bd31-0400a5776bb3","Type":"ContainerDied","Data":"3bd702cf0a680b903d36c766b8a63e20a144634c8c7bb80e23391466b4afbead"}
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.741837 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"]
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.749905 4792 scope.go:117] "RemoveContainer" containerID="a627825a60808b984a1ccd83f137d2140aec1bff4bbccf80c2560f91b72073f7"
Nov 27 17:14:57 crc kubenswrapper[4792]: W1127 17:14:57.752668 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod924c27d9_6c0b_45ff_95a3_4b1944b74742.slice/crio-df7e9026fedad89ecfd48c835da75b21e5d31726a701ebc55d3df99a0d89d434 WatchSource:0}: Error finding container df7e9026fedad89ecfd48c835da75b21e5d31726a701ebc55d3df99a0d89d434: Status 404 returned error can't find the container with id df7e9026fedad89ecfd48c835da75b21e5d31726a701ebc55d3df99a0d89d434
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.753453 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk"]
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.762437 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7dfc5f568d-jjmnk"]
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.765824 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc"]
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.770893 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d7f7ff5b-2szwc"]
Nov 27 17:14:57 crc kubenswrapper[4792]: I1127 17:14:57.774266 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"]
Nov 27 17:14:57 crc kubenswrapper[4792]: W1127 17:14:57.779438 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7e02165_5596_481e_9a2d_b3d01896d05d.slice/crio-287a8fd1029d064a0b57cdd09efcfb242d7c0e521e49ae30f106ada1c350c8e3 WatchSource:0}: Error finding container 287a8fd1029d064a0b57cdd09efcfb242d7c0e521e49ae30f106ada1c350c8e3: Status 404 returned error can't find the container with id 287a8fd1029d064a0b57cdd09efcfb242d7c0e521e49ae30f106ada1c350c8e3
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.702211 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454b5a49-614e-4dfc-bd31-0400a5776bb3" path="/var/lib/kubelet/pods/454b5a49-614e-4dfc-bd31-0400a5776bb3/volumes"
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.703200 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8721ba87-b116-4a78-a6f3-1b7bd4011272" path="/var/lib/kubelet/pods/8721ba87-b116-4a78-a6f3-1b7bd4011272/volumes"
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.723834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd" event={"ID":"924c27d9-6c0b-45ff-95a3-4b1944b74742","Type":"ContainerStarted","Data":"ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4"}
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.723889 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd" event={"ID":"924c27d9-6c0b-45ff-95a3-4b1944b74742","Type":"ContainerStarted","Data":"df7e9026fedad89ecfd48c835da75b21e5d31726a701ebc55d3df99a0d89d434"}
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.724630 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.725815 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg" event={"ID":"d7e02165-5596-481e-9a2d-b3d01896d05d","Type":"ContainerStarted","Data":"9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026"}
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.725852 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg" event={"ID":"d7e02165-5596-481e-9a2d-b3d01896d05d","Type":"ContainerStarted","Data":"287a8fd1029d064a0b57cdd09efcfb242d7c0e521e49ae30f106ada1c350c8e3"}
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.726090 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.730697 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.733033 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.748868 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd" podStartSLOduration=3.748850728 podStartE2EDuration="3.748850728s" podCreationTimestamp="2025-11-27 17:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:14:58.744879539 +0000 UTC m=+321.087705877" watchObservedRunningTime="2025-11-27 17:14:58.748850728 +0000 UTC m=+321.091677046"
Nov 27 17:14:58 crc kubenswrapper[4792]: I1127 17:14:58.760115 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg" podStartSLOduration=3.760099855 podStartE2EDuration="3.760099855s" podCreationTimestamp="2025-11-27 17:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:14:58.75991883 +0000 UTC m=+321.102745148" watchObservedRunningTime="2025-11-27 17:14:58.760099855 +0000 UTC m=+321.102926173"
Nov 27 17:14:59 crc kubenswrapper[4792]: I1127 17:14:59.396556 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm"
Nov 27 17:14:59 crc kubenswrapper[4792]: E1127 17:14:59.396802 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Nov 27 17:14:59 crc kubenswrapper[4792]: E1127 17:14:59.397064 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates podName:7a6db3f4-940f-4d6f-892d-d6f5003bd881 nodeName:}" failed. No retries permitted until 2025-11-27 17:15:15.397047453 +0000 UTC m=+337.739873771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-xmhbm" (UID: "7a6db3f4-940f-4d6f-892d-d6f5003bd881") : secret "prometheus-operator-admission-webhook-tls" not found
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.172658 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"]
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.174279 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.176153 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.177125 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.182069 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"]
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.209812 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrl8\" (UniqueName: \"kubernetes.io/projected/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-kube-api-access-trrl8\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.209946 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-secret-volume\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.210059 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-config-volume\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.311088 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-config-volume\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.311160 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrl8\" (UniqueName: \"kubernetes.io/projected/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-kube-api-access-trrl8\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.311212 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-secret-volume\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.312208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-config-volume\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.317973 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-secret-volume\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.330552 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrl8\" (UniqueName: \"kubernetes.io/projected/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-kube-api-access-trrl8\") pod \"collect-profiles-29404395-p9nk6\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.490562 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:00 crc kubenswrapper[4792]: I1127 17:15:00.938234 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"]
Nov 27 17:15:00 crc kubenswrapper[4792]: W1127 17:15:00.947842 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e9ee27_f9b2_47ec_bf45_3b97be28e298.slice/crio-1c717e7085e8dd81ff4b3981137bb40eeaed203bb6d3c81a6b9985d794aead1f WatchSource:0}: Error finding container 1c717e7085e8dd81ff4b3981137bb40eeaed203bb6d3c81a6b9985d794aead1f: Status 404 returned error can't find the container with id 1c717e7085e8dd81ff4b3981137bb40eeaed203bb6d3c81a6b9985d794aead1f
Nov 27 17:15:01 crc kubenswrapper[4792]: I1127 17:15:01.748525 4792 generic.go:334] "Generic (PLEG): container finished" podID="f0e9ee27-f9b2-47ec-bf45-3b97be28e298" containerID="ef5051df5fafe3a49c4660f1170b2f981887674de1c9095772d474c7aff08193" exitCode=0
Nov 27 17:15:01 crc kubenswrapper[4792]: I1127 17:15:01.748601 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6" event={"ID":"f0e9ee27-f9b2-47ec-bf45-3b97be28e298","Type":"ContainerDied","Data":"ef5051df5fafe3a49c4660f1170b2f981887674de1c9095772d474c7aff08193"}
Nov 27 17:15:01 crc kubenswrapper[4792]: I1127 17:15:01.748880 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6" event={"ID":"f0e9ee27-f9b2-47ec-bf45-3b97be28e298","Type":"ContainerStarted","Data":"1c717e7085e8dd81ff4b3981137bb40eeaed203bb6d3c81a6b9985d794aead1f"}
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.127745 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.250159 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trrl8\" (UniqueName: \"kubernetes.io/projected/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-kube-api-access-trrl8\") pod \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") "
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.250227 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-config-volume\") pod \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") "
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.250268 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-secret-volume\") pod \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\" (UID: \"f0e9ee27-f9b2-47ec-bf45-3b97be28e298\") "
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.251037 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-config-volume" (OuterVolumeSpecName: "config-volume") pod "f0e9ee27-f9b2-47ec-bf45-3b97be28e298" (UID: "f0e9ee27-f9b2-47ec-bf45-3b97be28e298"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.256818 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f0e9ee27-f9b2-47ec-bf45-3b97be28e298" (UID: "f0e9ee27-f9b2-47ec-bf45-3b97be28e298"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.256857 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-kube-api-access-trrl8" (OuterVolumeSpecName: "kube-api-access-trrl8") pod "f0e9ee27-f9b2-47ec-bf45-3b97be28e298" (UID: "f0e9ee27-f9b2-47ec-bf45-3b97be28e298"). InnerVolumeSpecName "kube-api-access-trrl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.352366 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trrl8\" (UniqueName: \"kubernetes.io/projected/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-kube-api-access-trrl8\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.352410 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-config-volume\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.352421 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0e9ee27-f9b2-47ec-bf45-3b97be28e298-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.767048 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6" event={"ID":"f0e9ee27-f9b2-47ec-bf45-3b97be28e298","Type":"ContainerDied","Data":"1c717e7085e8dd81ff4b3981137bb40eeaed203bb6d3c81a6b9985d794aead1f"}
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.767352 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c717e7085e8dd81ff4b3981137bb40eeaed203bb6d3c81a6b9985d794aead1f"
Nov 27 17:15:03 crc kubenswrapper[4792]: I1127 17:15:03.767138 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"
Nov 27 17:15:06 crc kubenswrapper[4792]: I1127 17:15:06.377265 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 27 17:15:09 crc kubenswrapper[4792]: I1127 17:15:09.523837 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"]
Nov 27 17:15:09 crc kubenswrapper[4792]: I1127 17:15:09.524096 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd" podUID="924c27d9-6c0b-45ff-95a3-4b1944b74742" containerName="controller-manager" containerID="cri-o://ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4" gracePeriod=30
Nov 27 17:15:09 crc kubenswrapper[4792]: I1127 17:15:09.541106 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"]
Nov 27 17:15:09 crc kubenswrapper[4792]: I1127 17:15:09.541349 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg" podUID="d7e02165-5596-481e-9a2d-b3d01896d05d" containerName="route-controller-manager" containerID="cri-o://9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026" gracePeriod=30
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.554550 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.590081 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"]
Nov 27 17:15:10 crc kubenswrapper[4792]: E1127 17:15:10.590294 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e02165-5596-481e-9a2d-b3d01896d05d" containerName="route-controller-manager"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.590305 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e02165-5596-481e-9a2d-b3d01896d05d" containerName="route-controller-manager"
Nov 27 17:15:10 crc kubenswrapper[4792]: E1127 17:15:10.590324 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e9ee27-f9b2-47ec-bf45-3b97be28e298" containerName="collect-profiles"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.590330 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e9ee27-f9b2-47ec-bf45-3b97be28e298" containerName="collect-profiles"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.590433 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e9ee27-f9b2-47ec-bf45-3b97be28e298" containerName="collect-profiles"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.590464 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e02165-5596-481e-9a2d-b3d01896d05d" containerName="route-controller-manager"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.590848 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.646449 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"]
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.728481 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756435 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-config\") pod \"924c27d9-6c0b-45ff-95a3-4b1944b74742\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756475 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-client-ca\") pod \"d7e02165-5596-481e-9a2d-b3d01896d05d\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756499 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-config\") pod \"d7e02165-5596-481e-9a2d-b3d01896d05d\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756538 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wwmq\" (UniqueName: \"kubernetes.io/projected/d7e02165-5596-481e-9a2d-b3d01896d05d-kube-api-access-8wwmq\") pod \"d7e02165-5596-481e-9a2d-b3d01896d05d\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756563 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/924c27d9-6c0b-45ff-95a3-4b1944b74742-serving-cert\") pod \"924c27d9-6c0b-45ff-95a3-4b1944b74742\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756735 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc2zz\" (UniqueName: \"kubernetes.io/projected/924c27d9-6c0b-45ff-95a3-4b1944b74742-kube-api-access-mc2zz\") pod \"924c27d9-6c0b-45ff-95a3-4b1944b74742\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-client-ca\") pod \"924c27d9-6c0b-45ff-95a3-4b1944b74742\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756815 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-proxy-ca-bundles\") pod \"924c27d9-6c0b-45ff-95a3-4b1944b74742\" (UID: \"924c27d9-6c0b-45ff-95a3-4b1944b74742\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756839 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e02165-5596-481e-9a2d-b3d01896d05d-serving-cert\") pod \"d7e02165-5596-481e-9a2d-b3d01896d05d\" (UID: \"d7e02165-5596-481e-9a2d-b3d01896d05d\") "
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.756972 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlrrb\" (UniqueName: \"kubernetes.io/projected/c655908e-c924-46d6-bc32-5ff1a6f5229f-kube-api-access-mlrrb\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.757005 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-config\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.757051 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-client-ca\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.757083 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c655908e-c924-46d6-bc32-5ff1a6f5229f-serving-cert\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.757356 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-config" (OuterVolumeSpecName: "config") pod "924c27d9-6c0b-45ff-95a3-4b1944b74742" (UID: "924c27d9-6c0b-45ff-95a3-4b1944b74742"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.757877 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-client-ca" (OuterVolumeSpecName: "client-ca") pod "924c27d9-6c0b-45ff-95a3-4b1944b74742" (UID: "924c27d9-6c0b-45ff-95a3-4b1944b74742"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.757880 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7e02165-5596-481e-9a2d-b3d01896d05d" (UID: "d7e02165-5596-481e-9a2d-b3d01896d05d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.758360 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "924c27d9-6c0b-45ff-95a3-4b1944b74742" (UID: "924c27d9-6c0b-45ff-95a3-4b1944b74742"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.758360 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-config" (OuterVolumeSpecName: "config") pod "d7e02165-5596-481e-9a2d-b3d01896d05d" (UID: "d7e02165-5596-481e-9a2d-b3d01896d05d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.763517 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e02165-5596-481e-9a2d-b3d01896d05d-kube-api-access-8wwmq" (OuterVolumeSpecName: "kube-api-access-8wwmq") pod "d7e02165-5596-481e-9a2d-b3d01896d05d" (UID: "d7e02165-5596-481e-9a2d-b3d01896d05d"). InnerVolumeSpecName "kube-api-access-8wwmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.763598 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924c27d9-6c0b-45ff-95a3-4b1944b74742-kube-api-access-mc2zz" (OuterVolumeSpecName: "kube-api-access-mc2zz") pod "924c27d9-6c0b-45ff-95a3-4b1944b74742" (UID: "924c27d9-6c0b-45ff-95a3-4b1944b74742"). InnerVolumeSpecName "kube-api-access-mc2zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.769679 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/924c27d9-6c0b-45ff-95a3-4b1944b74742-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "924c27d9-6c0b-45ff-95a3-4b1944b74742" (UID: "924c27d9-6c0b-45ff-95a3-4b1944b74742"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.771944 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e02165-5596-481e-9a2d-b3d01896d05d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e02165-5596-481e-9a2d-b3d01896d05d" (UID: "d7e02165-5596-481e-9a2d-b3d01896d05d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.805120 4792 generic.go:334] "Generic (PLEG): container finished" podID="d7e02165-5596-481e-9a2d-b3d01896d05d" containerID="9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026" exitCode=0
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.805300 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg" event={"ID":"d7e02165-5596-481e-9a2d-b3d01896d05d","Type":"ContainerDied","Data":"9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026"}
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.805458 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg" event={"ID":"d7e02165-5596-481e-9a2d-b3d01896d05d","Type":"ContainerDied","Data":"287a8fd1029d064a0b57cdd09efcfb242d7c0e521e49ae30f106ada1c350c8e3"}
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.805534 4792 scope.go:117] "RemoveContainer" containerID="9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.805378 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.809681 4792 generic.go:334] "Generic (PLEG): container finished" podID="924c27d9-6c0b-45ff-95a3-4b1944b74742" containerID="ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4" exitCode=0
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.809752 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.809794 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd" event={"ID":"924c27d9-6c0b-45ff-95a3-4b1944b74742","Type":"ContainerDied","Data":"ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4"}
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.813731 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cbb956dd9-kspsd" event={"ID":"924c27d9-6c0b-45ff-95a3-4b1944b74742","Type":"ContainerDied","Data":"df7e9026fedad89ecfd48c835da75b21e5d31726a701ebc55d3df99a0d89d434"}
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.825962 4792 scope.go:117] "RemoveContainer" containerID="9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026"
Nov 27 17:15:10 crc kubenswrapper[4792]: E1127 17:15:10.826564 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026\": container with ID starting with 9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026 not found: ID does not exist" containerID="9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.826600 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026"} err="failed to get container status \"9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026\": rpc error: code = NotFound desc = could not find container \"9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026\": container with ID starting with 9e211c81a5b405de6e4173dcc087c7ef38e7a028d9bcceed999914d6eb305026 not found: ID does not exist"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.826627 4792 scope.go:117] "RemoveContainer" containerID="ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.842439 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"]
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.844374 4792 scope.go:117] "RemoveContainer" containerID="ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4"
Nov 27 17:15:10 crc kubenswrapper[4792]: E1127 17:15:10.844928 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4\": container with ID starting with ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4 not found: ID does not exist" containerID="ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.844968 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4"} err="failed to get container status \"ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4\": rpc error: code = NotFound desc = could not find container \"ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4\": container with ID starting with ba34447bc072fb35cdc68f7c2b92f1b1ccf7b56d6c66fff8c02851c2b398f3a4 not found: ID does not exist"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.846352 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-z82rg"]
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.852340 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"]
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.857555 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-kspsd"]
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.858246 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlrrb\" (UniqueName: \"kubernetes.io/projected/c655908e-c924-46d6-bc32-5ff1a6f5229f-kube-api-access-mlrrb\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.858465 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-config\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.858634 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-client-ca\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.858997 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c655908e-c924-46d6-bc32-5ff1a6f5229f-serving-cert\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859205 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859306 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wwmq\" (UniqueName: \"kubernetes.io/projected/d7e02165-5596-481e-9a2d-b3d01896d05d-kube-api-access-8wwmq\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859411 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/924c27d9-6c0b-45ff-95a3-4b1944b74742-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859531 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc2zz\" (UniqueName: \"kubernetes.io/projected/924c27d9-6c0b-45ff-95a3-4b1944b74742-kube-api-access-mc2zz\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859618 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-client-ca\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859636 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-client-ca\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859716 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859730 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e02165-5596-481e-9a2d-b3d01896d05d-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859743 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/924c27d9-6c0b-45ff-95a3-4b1944b74742-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.859755 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7e02165-5596-481e-9a2d-b3d01896d05d-client-ca\") on node \"crc\" DevicePath \"\""
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.860638 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-config\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.862574 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c655908e-c924-46d6-bc32-5ff1a6f5229f-serving-cert\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.875928 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlrrb\" (UniqueName: \"kubernetes.io/projected/c655908e-c924-46d6-bc32-5ff1a6f5229f-kube-api-access-mlrrb\") pod \"route-controller-manager-96f6598df-hlzlt\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:10 crc kubenswrapper[4792]: I1127 17:15:10.904637 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:11 crc kubenswrapper[4792]: I1127 17:15:11.288956 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"]
Nov 27 17:15:11 crc kubenswrapper[4792]: W1127 17:15:11.294334 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc655908e_c924_46d6_bc32_5ff1a6f5229f.slice/crio-354732deb7abcb26a31722e4804186add406d044b5eb44dcf5ced239e1e7ed25 WatchSource:0}: Error finding container 354732deb7abcb26a31722e4804186add406d044b5eb44dcf5ced239e1e7ed25: Status 404 returned error can't find the container with id 354732deb7abcb26a31722e4804186add406d044b5eb44dcf5ced239e1e7ed25
Nov 27 17:15:11 crc kubenswrapper[4792]: I1127 17:15:11.819489 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt" event={"ID":"c655908e-c924-46d6-bc32-5ff1a6f5229f","Type":"ContainerStarted","Data":"559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b"}
Nov 27 17:15:11 crc kubenswrapper[4792]: I1127 17:15:11.819873 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:11 crc kubenswrapper[4792]: I1127 17:15:11.819896 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt" event={"ID":"c655908e-c924-46d6-bc32-5ff1a6f5229f","Type":"ContainerStarted","Data":"354732deb7abcb26a31722e4804186add406d044b5eb44dcf5ced239e1e7ed25"}
Nov 27 17:15:11 crc kubenswrapper[4792]: I1127 17:15:11.841605 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt" podStartSLOduration=2.8415691130000003 podStartE2EDuration="2.841569113s" podCreationTimestamp="2025-11-27 17:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:15:11.841525232 +0000 UTC m=+334.184351560" watchObservedRunningTime="2025-11-27 17:15:11.841569113 +0000 UTC m=+334.184395431"
Nov 27 17:15:12 crc kubenswrapper[4792]: I1127 17:15:12.020120 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"
Nov 27 17:15:12 crc kubenswrapper[4792]: I1127 17:15:12.696194 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924c27d9-6c0b-45ff-95a3-4b1944b74742" path="/var/lib/kubelet/pods/924c27d9-6c0b-45ff-95a3-4b1944b74742/volumes"
Nov 27 17:15:12 crc kubenswrapper[4792]: I1127 17:15:12.697359 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e02165-5596-481e-9a2d-b3d01896d05d" path="/var/lib/kubelet/pods/d7e02165-5596-481e-9a2d-b3d01896d05d/volumes"
Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.154179 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75795fb4fc-4dxjq"]
Nov 27 17:15:13 crc kubenswrapper[4792]: E1127 17:15:13.154401 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924c27d9-6c0b-45ff-95a3-4b1944b74742" containerName="controller-manager"
Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.154413
4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="924c27d9-6c0b-45ff-95a3-4b1944b74742" containerName="controller-manager" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.154500 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="924c27d9-6c0b-45ff-95a3-4b1944b74742" containerName="controller-manager" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.154885 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.160270 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.164177 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.164251 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.164312 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.164338 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.164613 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.170890 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75795fb4fc-4dxjq"] Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.171776 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.191559 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-client-ca\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.191738 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54dj\" (UniqueName: \"kubernetes.io/projected/b092c25d-5169-473c-8240-4b47afbf609a-kube-api-access-x54dj\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.191860 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-config\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.191942 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-proxy-ca-bundles\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.192222 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b092c25d-5169-473c-8240-4b47afbf609a-serving-cert\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.293953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-config\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.294043 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-proxy-ca-bundles\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.294105 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b092c25d-5169-473c-8240-4b47afbf609a-serving-cert\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.294147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-client-ca\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.294225 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54dj\" (UniqueName: \"kubernetes.io/projected/b092c25d-5169-473c-8240-4b47afbf609a-kube-api-access-x54dj\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.295685 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-proxy-ca-bundles\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.296026 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-client-ca\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " 
pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.297091 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-config\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.301940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b092c25d-5169-473c-8240-4b47afbf609a-serving-cert\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.318051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x54dj\" (UniqueName: \"kubernetes.io/projected/b092c25d-5169-473c-8240-4b47afbf609a-kube-api-access-x54dj\") pod \"controller-manager-75795fb4fc-4dxjq\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.496126 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.782897 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-s7cwd" Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.833294 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ft9s4"] Nov 27 17:15:13 crc kubenswrapper[4792]: I1127 17:15:13.969005 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75795fb4fc-4dxjq"] Nov 27 17:15:14 crc kubenswrapper[4792]: I1127 17:15:14.846487 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" event={"ID":"b092c25d-5169-473c-8240-4b47afbf609a","Type":"ContainerStarted","Data":"28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe"} Nov 27 17:15:14 crc kubenswrapper[4792]: I1127 17:15:14.846927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" event={"ID":"b092c25d-5169-473c-8240-4b47afbf609a","Type":"ContainerStarted","Data":"af2e39bd46506358c04f5a8bae9a1702714c97ca076290b44b43fc51c63a4be0"} Nov 27 17:15:14 crc kubenswrapper[4792]: I1127 17:15:14.846941 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:14 crc kubenswrapper[4792]: I1127 17:15:14.853165 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:14 crc kubenswrapper[4792]: I1127 17:15:14.878734 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" podStartSLOduration=5.878712904 podStartE2EDuration="5.878712904s" podCreationTimestamp="2025-11-27 17:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:15:14.865412448 +0000 UTC m=+337.208238766" watchObservedRunningTime="2025-11-27 17:15:14.878712904 +0000 UTC m=+337.221539222" Nov 27 17:15:15 crc kubenswrapper[4792]: I1127 17:15:15.430580 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:15:15 crc kubenswrapper[4792]: E1127 17:15:15.430740 4792 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:15:15 crc kubenswrapper[4792]: E1127 17:15:15.431060 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates podName:7a6db3f4-940f-4d6f-892d-d6f5003bd881 nodeName:}" failed. No retries permitted until 2025-11-27 17:15:47.431043434 +0000 UTC m=+369.773869742 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-xmhbm" (UID: "7a6db3f4-940f-4d6f-892d-d6f5003bd881") : secret "prometheus-operator-admission-webhook-tls" not found Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.004732 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"] Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.004968 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt" podUID="c655908e-c924-46d6-bc32-5ff1a6f5229f" containerName="route-controller-manager" containerID="cri-o://559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b" gracePeriod=30 Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.472366 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.647319 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-config\") pod \"c655908e-c924-46d6-bc32-5ff1a6f5229f\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.647382 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlrrb\" (UniqueName: \"kubernetes.io/projected/c655908e-c924-46d6-bc32-5ff1a6f5229f-kube-api-access-mlrrb\") pod \"c655908e-c924-46d6-bc32-5ff1a6f5229f\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.647422 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c655908e-c924-46d6-bc32-5ff1a6f5229f-serving-cert\") pod \"c655908e-c924-46d6-bc32-5ff1a6f5229f\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.647461 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-client-ca\") pod \"c655908e-c924-46d6-bc32-5ff1a6f5229f\" (UID: \"c655908e-c924-46d6-bc32-5ff1a6f5229f\") " Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.648487 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c655908e-c924-46d6-bc32-5ff1a6f5229f" (UID: "c655908e-c924-46d6-bc32-5ff1a6f5229f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.648992 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-config" (OuterVolumeSpecName: "config") pod "c655908e-c924-46d6-bc32-5ff1a6f5229f" (UID: "c655908e-c924-46d6-bc32-5ff1a6f5229f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.655371 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c655908e-c924-46d6-bc32-5ff1a6f5229f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c655908e-c924-46d6-bc32-5ff1a6f5229f" (UID: "c655908e-c924-46d6-bc32-5ff1a6f5229f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.655390 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c655908e-c924-46d6-bc32-5ff1a6f5229f-kube-api-access-mlrrb" (OuterVolumeSpecName: "kube-api-access-mlrrb") pod "c655908e-c924-46d6-bc32-5ff1a6f5229f" (UID: "c655908e-c924-46d6-bc32-5ff1a6f5229f"). InnerVolumeSpecName "kube-api-access-mlrrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.749568 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-client-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.749624 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c655908e-c924-46d6-bc32-5ff1a6f5229f-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.749686 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlrrb\" (UniqueName: \"kubernetes.io/projected/c655908e-c924-46d6-bc32-5ff1a6f5229f-kube-api-access-mlrrb\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.749702 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c655908e-c924-46d6-bc32-5ff1a6f5229f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.858559 4792 generic.go:334] "Generic (PLEG): container finished" podID="c655908e-c924-46d6-bc32-5ff1a6f5229f" containerID="559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b" exitCode=0 Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.858630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt" event={"ID":"c655908e-c924-46d6-bc32-5ff1a6f5229f","Type":"ContainerDied","Data":"559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b"} Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.858747 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt" event={"ID":"c655908e-c924-46d6-bc32-5ff1a6f5229f","Type":"ContainerDied","Data":"354732deb7abcb26a31722e4804186add406d044b5eb44dcf5ced239e1e7ed25"} Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.858753 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.858782 4792 scope.go:117] "RemoveContainer" containerID="559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.874789 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"] Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.880963 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-96f6598df-hlzlt"] Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.900639 4792 scope.go:117] "RemoveContainer" containerID="559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b" Nov 27 17:15:16 crc kubenswrapper[4792]: E1127 17:15:16.901835 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b\": container with ID starting with 559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b not found: ID does not exist" containerID="559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b" Nov 27 17:15:16 crc kubenswrapper[4792]: I1127 17:15:16.901890 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b"} err="failed to get container status \"559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b\": rpc error: code = NotFound desc = could not find container \"559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b\": container with ID starting with 559e78b079961c86ec1801633d8fad479780fd7f46ee9e1871c4b77a10f37d8b not found: ID does not exist" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.155315 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2"] Nov 27 17:15:17 crc kubenswrapper[4792]: E1127 17:15:17.155516 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c655908e-c924-46d6-bc32-5ff1a6f5229f" containerName="route-controller-manager" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.155529 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c655908e-c924-46d6-bc32-5ff1a6f5229f" containerName="route-controller-manager" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.155659 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c655908e-c924-46d6-bc32-5ff1a6f5229f" containerName="route-controller-manager" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.156022 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.160485 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.160634 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.160784 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.160897 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.161077 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.161206 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.172896 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2"] Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.258354 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd14db3-ab73-46b2-9442-6f9ab63b257d-config\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.258439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrk8\" (UniqueName: \"kubernetes.io/projected/cfd14db3-ab73-46b2-9442-6f9ab63b257d-kube-api-access-mnrk8\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.258572 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfd14db3-ab73-46b2-9442-6f9ab63b257d-client-ca\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.258884 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfd14db3-ab73-46b2-9442-6f9ab63b257d-serving-cert\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.360127 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd14db3-ab73-46b2-9442-6f9ab63b257d-config\") pod 
\"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.360181 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrk8\" (UniqueName: \"kubernetes.io/projected/cfd14db3-ab73-46b2-9442-6f9ab63b257d-kube-api-access-mnrk8\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.360213 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfd14db3-ab73-46b2-9442-6f9ab63b257d-client-ca\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.360292 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfd14db3-ab73-46b2-9442-6f9ab63b257d-serving-cert\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.361239 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfd14db3-ab73-46b2-9442-6f9ab63b257d-client-ca\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.361672 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfd14db3-ab73-46b2-9442-6f9ab63b257d-config\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.363878 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfd14db3-ab73-46b2-9442-6f9ab63b257d-serving-cert\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.376934 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrk8\" (UniqueName: \"kubernetes.io/projected/cfd14db3-ab73-46b2-9442-6f9ab63b257d-kube-api-access-mnrk8\") pod \"route-controller-manager-6ddb95cdcf-tnnp2\" (UID: \"cfd14db3-ab73-46b2-9442-6f9ab63b257d\") " pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.526282 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:17 crc kubenswrapper[4792]: I1127 17:15:17.957955 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2"] Nov 27 17:15:17 crc kubenswrapper[4792]: W1127 17:15:17.964741 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfd14db3_ab73_46b2_9442_6f9ab63b257d.slice/crio-2e40cada09c42d33854d816d55dac6dde70c04bf96ef45a79445def0f6c40a82 WatchSource:0}: Error finding container 2e40cada09c42d33854d816d55dac6dde70c04bf96ef45a79445def0f6c40a82: Status 404 returned error can't find the container with id 2e40cada09c42d33854d816d55dac6dde70c04bf96ef45a79445def0f6c40a82 Nov 27 17:15:18 crc kubenswrapper[4792]: I1127 17:15:18.693716 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c655908e-c924-46d6-bc32-5ff1a6f5229f" path="/var/lib/kubelet/pods/c655908e-c924-46d6-bc32-5ff1a6f5229f/volumes" Nov 27 17:15:18 crc kubenswrapper[4792]: I1127 17:15:18.879343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" event={"ID":"cfd14db3-ab73-46b2-9442-6f9ab63b257d","Type":"ContainerStarted","Data":"508f0771d81104d9a9a8112c5ddd047a55f58770fac3ae290881360366127d91"} Nov 27 17:15:18 crc kubenswrapper[4792]: I1127 17:15:18.879402 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" event={"ID":"cfd14db3-ab73-46b2-9442-6f9ab63b257d","Type":"ContainerStarted","Data":"2e40cada09c42d33854d816d55dac6dde70c04bf96ef45a79445def0f6c40a82"} Nov 27 17:15:18 crc kubenswrapper[4792]: I1127 17:15:18.879586 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:18 crc kubenswrapper[4792]: I1127 17:15:18.887570 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" Nov 27 17:15:18 crc kubenswrapper[4792]: I1127 17:15:18.897134 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ddb95cdcf-tnnp2" podStartSLOduration=2.897117756 podStartE2EDuration="2.897117756s" podCreationTimestamp="2025-11-27 17:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:15:18.895488185 +0000 UTC m=+341.238314503" watchObservedRunningTime="2025-11-27 17:15:18.897117756 +0000 UTC m=+341.239944074" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.240807 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75795fb4fc-4dxjq"] Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.241501 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" podUID="b092c25d-5169-473c-8240-4b47afbf609a" containerName="controller-manager" containerID="cri-o://28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe" gracePeriod=30 Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.796672 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.838315 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x54dj\" (UniqueName: \"kubernetes.io/projected/b092c25d-5169-473c-8240-4b47afbf609a-kube-api-access-x54dj\") pod \"b092c25d-5169-473c-8240-4b47afbf609a\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.838743 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-proxy-ca-bundles\") pod \"b092c25d-5169-473c-8240-4b47afbf609a\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.838791 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-client-ca\") pod \"b092c25d-5169-473c-8240-4b47afbf609a\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.838845 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-config\") pod \"b092c25d-5169-473c-8240-4b47afbf609a\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.838866 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b092c25d-5169-473c-8240-4b47afbf609a-serving-cert\") pod \"b092c25d-5169-473c-8240-4b47afbf609a\" (UID: \"b092c25d-5169-473c-8240-4b47afbf609a\") " Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.839538 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-client-ca" (OuterVolumeSpecName: "client-ca") pod "b092c25d-5169-473c-8240-4b47afbf609a" (UID: "b092c25d-5169-473c-8240-4b47afbf609a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.839556 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b092c25d-5169-473c-8240-4b47afbf609a" (UID: "b092c25d-5169-473c-8240-4b47afbf609a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.840180 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-config" (OuterVolumeSpecName: "config") pod "b092c25d-5169-473c-8240-4b47afbf609a" (UID: "b092c25d-5169-473c-8240-4b47afbf609a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.848955 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b092c25d-5169-473c-8240-4b47afbf609a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b092c25d-5169-473c-8240-4b47afbf609a" (UID: "b092c25d-5169-473c-8240-4b47afbf609a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.848990 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b092c25d-5169-473c-8240-4b47afbf609a-kube-api-access-x54dj" (OuterVolumeSpecName: "kube-api-access-x54dj") pod "b092c25d-5169-473c-8240-4b47afbf609a" (UID: "b092c25d-5169-473c-8240-4b47afbf609a"). InnerVolumeSpecName "kube-api-access-x54dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.940134 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x54dj\" (UniqueName: \"kubernetes.io/projected/b092c25d-5169-473c-8240-4b47afbf609a-kube-api-access-x54dj\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.940171 4792 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.940182 4792 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.940217 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b092c25d-5169-473c-8240-4b47afbf609a-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.940226 4792 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b092c25d-5169-473c-8240-4b47afbf609a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.947437 4792 generic.go:334] "Generic (PLEG): container finished" podID="b092c25d-5169-473c-8240-4b47afbf609a" containerID="28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe" exitCode=0 Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.947500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" event={"ID":"b092c25d-5169-473c-8240-4b47afbf609a","Type":"ContainerDied","Data":"28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe"} Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.947542 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" event={"ID":"b092c25d-5169-473c-8240-4b47afbf609a","Type":"ContainerDied","Data":"af2e39bd46506358c04f5a8bae9a1702714c97ca076290b44b43fc51c63a4be0"} Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.947573 4792 scope.go:117] "RemoveContainer" containerID="28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.947794 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75795fb4fc-4dxjq" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.965289 4792 scope.go:117] "RemoveContainer" containerID="28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe" Nov 27 17:15:29 crc kubenswrapper[4792]: E1127 17:15:29.965663 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe\": container with ID starting with 28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe not found: ID does not exist" containerID="28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.965709 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe"} err="failed to get container status \"28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe\": rpc error: code = NotFound desc = could not find container \"28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe\": container with ID starting with 28c75967833b8fc1437a7f7aac61892df2c7613d128c079793568e306d443dbe not found: ID does not exist" Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.986805 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75795fb4fc-4dxjq"] Nov 27 17:15:29 crc kubenswrapper[4792]: I1127 17:15:29.990157 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75795fb4fc-4dxjq"] Nov 27 17:15:30 crc kubenswrapper[4792]: I1127 17:15:30.694005 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b092c25d-5169-473c-8240-4b47afbf609a" path="/var/lib/kubelet/pods/b092c25d-5169-473c-8240-4b47afbf609a/volumes" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.166295 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-zcszr"] Nov 27 17:15:31 crc kubenswrapper[4792]: E1127 17:15:31.166499 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b092c25d-5169-473c-8240-4b47afbf609a" containerName="controller-manager" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.166512 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b092c25d-5169-473c-8240-4b47afbf609a" containerName="controller-manager" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.166633 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b092c25d-5169-473c-8240-4b47afbf609a" containerName="controller-manager" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.167048 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.169277 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.170561 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.170882 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.171038 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.171151 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.171740 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.179936 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.193952 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-zcszr"] Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.254052 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-client-ca\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.254111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-config\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.254131 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-serving-cert\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.254167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-proxy-ca-bundles\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.254219 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxpzq\" (UniqueName: 
\"kubernetes.io/projected/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-kube-api-access-nxpzq\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.355815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxpzq\" (UniqueName: \"kubernetes.io/projected/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-kube-api-access-nxpzq\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.355898 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-client-ca\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.355977 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-config\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.356016 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-serving-cert\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.356080 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-proxy-ca-bundles\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.357604 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-client-ca\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.357690 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-config\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.358955 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-proxy-ca-bundles\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " 
pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.360785 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-serving-cert\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.386778 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxpzq\" (UniqueName: \"kubernetes.io/projected/2ec2ae44-1f9c-4b77-9556-8ed325bb279d-kube-api-access-nxpzq\") pod \"controller-manager-5cbb956dd9-zcszr\" (UID: \"2ec2ae44-1f9c-4b77-9556-8ed325bb279d\") " pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.491437 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.901793 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cbb956dd9-zcszr"] Nov 27 17:15:31 crc kubenswrapper[4792]: W1127 17:15:31.910285 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ec2ae44_1f9c_4b77_9556_8ed325bb279d.slice/crio-346cdf64453c99cf99e29d5e576cad62a6cad58b8fcb55b51c66b19854f15453 WatchSource:0}: Error finding container 346cdf64453c99cf99e29d5e576cad62a6cad58b8fcb55b51c66b19854f15453: Status 404 returned error can't find the container with id 346cdf64453c99cf99e29d5e576cad62a6cad58b8fcb55b51c66b19854f15453 Nov 27 17:15:31 crc kubenswrapper[4792]: I1127 17:15:31.962895 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" event={"ID":"2ec2ae44-1f9c-4b77-9556-8ed325bb279d","Type":"ContainerStarted","Data":"346cdf64453c99cf99e29d5e576cad62a6cad58b8fcb55b51c66b19854f15453"} Nov 27 17:15:32 crc kubenswrapper[4792]: I1127 17:15:32.970063 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" event={"ID":"2ec2ae44-1f9c-4b77-9556-8ed325bb279d","Type":"ContainerStarted","Data":"bd839be4c453e631ab0b3e234ebb066c940df659b451d611ee26fa1ad3269495"} Nov 27 17:15:32 crc kubenswrapper[4792]: I1127 17:15:32.970441 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:32 crc kubenswrapper[4792]: I1127 17:15:32.976091 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" Nov 27 17:15:33 crc kubenswrapper[4792]: I1127 17:15:33.010268 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cbb956dd9-zcszr" podStartSLOduration=4.010230864 podStartE2EDuration="4.010230864s" podCreationTimestamp="2025-11-27 17:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:15:32.987856159 +0000 UTC m=+355.330682477" watchObservedRunningTime="2025-11-27 17:15:33.010230864 +0000 UTC m=+355.353057182" Nov 27 17:15:38 crc 
Nov 27 17:15:38 crc kubenswrapper[4792]: I1127 17:15:38.289962 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:15:38 crc kubenswrapper[4792]: I1127 17:15:38.290459 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:15:38 crc kubenswrapper[4792]: I1127 17:15:38.882796 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" podUID="becf7050-f3f8-42a3-bf02-cf9347e493e6" containerName="registry" containerID="cri-o://cc35ee17afe79a4926329533bf444b4f93481c5717f95194f53cc799bd6bba52" gracePeriod=30 Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.013583 4792 generic.go:334] "Generic (PLEG): container finished" podID="becf7050-f3f8-42a3-bf02-cf9347e493e6" containerID="cc35ee17afe79a4926329533bf444b4f93481c5717f95194f53cc799bd6bba52" exitCode=0 Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.013698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" event={"ID":"becf7050-f3f8-42a3-bf02-cf9347e493e6","Type":"ContainerDied","Data":"cc35ee17afe79a4926329533bf444b4f93481c5717f95194f53cc799bd6bba52"}
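
The two records at 17:15:38 are the kubelet's HTTP liveness prober being refused on the machine-config-daemon health endpoint: conceptually the probe is a plain GET against the configured port and path, and any transport error (here connect: connection refused) or non-2xx status counts as a failure. The "Killing container with a grace period" record that follows is unrelated: it is the normal termination sequence for the image-registry pod, SIGTERM first, SIGKILL only if the container outlives gracePeriod=30. A minimal sketch of such a probe, reusing the endpoint from the record (not the kubelet's prober implementation):

package main

import (
    "fmt"
    "net/http"
    "time"
)

// probe performs one HTTP liveness-style check: any transport error or
// non-2xx status is a failure, mirroring the prober.go records above.
func probe(url string) error {
    client := &http.Client{Timeout: time.Second}
    resp, err := client.Get(url)
    if err != nil {
        return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    }
    defer resp.Body.Close()
    if resp.StatusCode < 200 || resp.StatusCode >= 300 {
        return fmt.Errorf("unhealthy status %d", resp.StatusCode)
    }
    return nil
}

func main() {
    if err := probe("http://127.0.0.1:8798/health"); err != nil {
        fmt.Println("Probe failed:", err)
    }
}
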
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.292394 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"becf7050-f3f8-42a3-bf02-cf9347e493e6\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.292453 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj2lc\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-kube-api-access-bj2lc\") pod \"becf7050-f3f8-42a3-bf02-cf9347e493e6\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.292475 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/becf7050-f3f8-42a3-bf02-cf9347e493e6-installation-pull-secrets\") pod \"becf7050-f3f8-42a3-bf02-cf9347e493e6\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.292546 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-trusted-ca\") pod \"becf7050-f3f8-42a3-bf02-cf9347e493e6\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.292632 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/becf7050-f3f8-42a3-bf02-cf9347e493e6-ca-trust-extracted\") pod \"becf7050-f3f8-42a3-bf02-cf9347e493e6\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.292682 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-bound-sa-token\") pod \"becf7050-f3f8-42a3-bf02-cf9347e493e6\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.292701 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-certificates\") pod \"becf7050-f3f8-42a3-bf02-cf9347e493e6\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.292729 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-tls\") pod \"becf7050-f3f8-42a3-bf02-cf9347e493e6\" (UID: \"becf7050-f3f8-42a3-bf02-cf9347e493e6\") " Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.293984 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "becf7050-f3f8-42a3-bf02-cf9347e493e6" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.294177 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "becf7050-f3f8-42a3-bf02-cf9347e493e6" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.300028 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-kube-api-access-bj2lc" (OuterVolumeSpecName: "kube-api-access-bj2lc") pod "becf7050-f3f8-42a3-bf02-cf9347e493e6" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6"). InnerVolumeSpecName "kube-api-access-bj2lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.300055 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becf7050-f3f8-42a3-bf02-cf9347e493e6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "becf7050-f3f8-42a3-bf02-cf9347e493e6" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.306801 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "becf7050-f3f8-42a3-bf02-cf9347e493e6" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.307042 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "becf7050-f3f8-42a3-bf02-cf9347e493e6" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.310258 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "becf7050-f3f8-42a3-bf02-cf9347e493e6" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.311012 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/becf7050-f3f8-42a3-bf02-cf9347e493e6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "becf7050-f3f8-42a3-bf02-cf9347e493e6" (UID: "becf7050-f3f8-42a3-bf02-cf9347e493e6"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.394364 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj2lc\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-kube-api-access-bj2lc\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.394405 4792 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/becf7050-f3f8-42a3-bf02-cf9347e493e6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.394420 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.394452 4792 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/becf7050-f3f8-42a3-bf02-cf9347e493e6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.394465 4792 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.394476 4792 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:40 crc kubenswrapper[4792]: I1127 17:15:40.394488 4792 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/becf7050-f3f8-42a3-bf02-cf9347e493e6-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:15:41 crc kubenswrapper[4792]: I1127 17:15:41.021031 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" event={"ID":"becf7050-f3f8-42a3-bf02-cf9347e493e6","Type":"ContainerDied","Data":"58b9d8da0197d7503bf038d6572e25c2bc9afabb893cfd6d46ad6bd2480c1b82"} Nov 27 17:15:41 crc kubenswrapper[4792]: I1127 17:15:41.021083 4792 scope.go:117] "RemoveContainer" containerID="cc35ee17afe79a4926329533bf444b4f93481c5717f95194f53cc799bd6bba52" Nov 27 17:15:41 crc kubenswrapper[4792]: I1127 17:15:41.021202 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ft9s4" Nov 27 17:15:41 crc kubenswrapper[4792]: I1127 17:15:41.039058 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ft9s4"] Nov 27 17:15:41 crc kubenswrapper[4792]: I1127 17:15:41.044469 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ft9s4"] Nov 27 17:15:42 crc kubenswrapper[4792]: I1127 17:15:42.697394 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becf7050-f3f8-42a3-bf02-cf9347e493e6" path="/var/lib/kubelet/pods/becf7050-f3f8-42a3-bf02-cf9347e493e6/volumes" Nov 27 17:15:47 crc kubenswrapper[4792]: I1127 17:15:47.486288 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:15:47 crc kubenswrapper[4792]: I1127 17:15:47.493977 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7a6db3f4-940f-4d6f-892d-d6f5003bd881-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-xmhbm\" (UID: \"7a6db3f4-940f-4d6f-892d-d6f5003bd881\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:15:47 crc kubenswrapper[4792]: I1127 17:15:47.523465 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:15:47 crc kubenswrapper[4792]: I1127 17:15:47.968264 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm"] Nov 27 17:15:48 crc kubenswrapper[4792]: I1127 17:15:48.061386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" event={"ID":"7a6db3f4-940f-4d6f-892d-d6f5003bd881","Type":"ContainerStarted","Data":"1c2344cd595852c9e20e4419e48850b47fbb28e3602d3531f3877a9c34203e3d"} Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.078237 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" event={"ID":"7a6db3f4-940f-4d6f-892d-d6f5003bd881","Type":"ContainerStarted","Data":"1f8e4fa7ab7b94916aed9ca4f86443ceefe01c5d3167c274a2e9c2c3723a281a"} Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.078927 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.086551 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.102054 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-xmhbm" podStartSLOduration=65.905821523 podStartE2EDuration="1m8.102036s" podCreationTimestamp="2025-11-27 17:14:43 +0000 UTC" firstStartedPulling="2025-11-27 17:15:47.978220719 +0000 UTC 
m=+370.321047027" lastFinishedPulling="2025-11-27 17:15:50.174435186 +0000 UTC m=+372.517261504" observedRunningTime="2025-11-27 17:15:51.100908672 +0000 UTC m=+373.443734990" watchObservedRunningTime="2025-11-27 17:15:51.102036 +0000 UTC m=+373.444862318" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.671450 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-7w64n"] Nov 27 17:15:51 crc kubenswrapper[4792]: E1127 17:15:51.671728 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becf7050-f3f8-42a3-bf02-cf9347e493e6" containerName="registry" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.671752 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="becf7050-f3f8-42a3-bf02-cf9347e493e6" containerName="registry" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.671882 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="becf7050-f3f8-42a3-bf02-cf9347e493e6" containerName="registry" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.672545 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.674467 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.674636 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.675268 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.675474 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-5hpxd" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.688251 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-7w64n"] Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.787461 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6955e41e-c3ed-4248-a2f1-81b30aa72c22-metrics-client-ca\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.789063 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2cjw\" (UniqueName: \"kubernetes.io/projected/6955e41e-c3ed-4248-a2f1-81b30aa72c22-kube-api-access-t2cjw\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.789096 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6955e41e-c3ed-4248-a2f1-81b30aa72c22-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.789228 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6955e41e-c3ed-4248-a2f1-81b30aa72c22-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.890977 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6955e41e-c3ed-4248-a2f1-81b30aa72c22-metrics-client-ca\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.891062 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6955e41e-c3ed-4248-a2f1-81b30aa72c22-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.891092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2cjw\" (UniqueName: \"kubernetes.io/projected/6955e41e-c3ed-4248-a2f1-81b30aa72c22-kube-api-access-t2cjw\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.891148 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6955e41e-c3ed-4248-a2f1-81b30aa72c22-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.891937 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6955e41e-c3ed-4248-a2f1-81b30aa72c22-metrics-client-ca\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.898568 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6955e41e-c3ed-4248-a2f1-81b30aa72c22-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.898623 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6955e41e-c3ed-4248-a2f1-81b30aa72c22-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.912268 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t2cjw\" (UniqueName: \"kubernetes.io/projected/6955e41e-c3ed-4248-a2f1-81b30aa72c22-kube-api-access-t2cjw\") pod \"prometheus-operator-db54df47d-7w64n\" (UID: \"6955e41e-c3ed-4248-a2f1-81b30aa72c22\") " pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:51 crc kubenswrapper[4792]: I1127 17:15:51.988924 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" Nov 27 17:15:52 crc kubenswrapper[4792]: I1127 17:15:52.369667 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-7w64n"] Nov 27 17:15:52 crc kubenswrapper[4792]: W1127 17:15:52.382228 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6955e41e_c3ed_4248_a2f1_81b30aa72c22.slice/crio-4330de1cbc6e254e3aec9eabc341e93f9155427f526e52826e6608c0d5df75e8 WatchSource:0}: Error finding container 4330de1cbc6e254e3aec9eabc341e93f9155427f526e52826e6608c0d5df75e8: Status 404 returned error can't find the container with id 4330de1cbc6e254e3aec9eabc341e93f9155427f526e52826e6608c0d5df75e8 Nov 27 17:15:53 crc kubenswrapper[4792]: I1127 17:15:53.097070 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" event={"ID":"6955e41e-c3ed-4248-a2f1-81b30aa72c22","Type":"ContainerStarted","Data":"4330de1cbc6e254e3aec9eabc341e93f9155427f526e52826e6608c0d5df75e8"} Nov 27 17:15:54 crc kubenswrapper[4792]: I1127 17:15:54.104619 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" event={"ID":"6955e41e-c3ed-4248-a2f1-81b30aa72c22","Type":"ContainerStarted","Data":"7b2a7eec908060af35243ba5fc66e4265d60c8d8019d41e53cb9fea5a5b476ba"} Nov 27 17:15:55 crc kubenswrapper[4792]: I1127 17:15:55.115939 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" event={"ID":"6955e41e-c3ed-4248-a2f1-81b30aa72c22","Type":"ContainerStarted","Data":"1f6b488b8eddaf62d9684ea45be9e92c64aaa4f5a1d9e8da34cf87f07b88c42a"} Nov 27 17:15:55 crc kubenswrapper[4792]: I1127 17:15:55.141727 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-7w64n" podStartSLOduration=2.585764335 podStartE2EDuration="4.141676949s" podCreationTimestamp="2025-11-27 17:15:51 +0000 UTC" firstStartedPulling="2025-11-27 17:15:52.385346165 +0000 UTC m=+374.728172493" lastFinishedPulling="2025-11-27 17:15:53.941258759 +0000 UTC m=+376.284085107" observedRunningTime="2025-11-27 17:15:55.14130297 +0000 UTC m=+377.484129328" watchObservedRunningTime="2025-11-27 17:15:55.141676949 +0000 UTC m=+377.484503307" Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.915233 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc"] Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.916747 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.920135 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.920669 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.920893 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-sdnrs" Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.927087 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc"] Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.995395 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p"] Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.996380 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.998497 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9jmrk" Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.998587 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 27 17:15:56 crc kubenswrapper[4792]: I1127 17:15:56.998890 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.000725 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.009459 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p"] Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.034292 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-7zcts"] Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.035723 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.038359 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-8p5jc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.038536 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.038686 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.059050 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b259275d-d7e5-4973-83d5-e0df6620cf1b-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.059118 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b259275d-d7e5-4973-83d5-e0df6620cf1b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.059182 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2fxm\" (UniqueName: \"kubernetes.io/projected/b259275d-d7e5-4973-83d5-e0df6620cf1b-kube-api-access-v2fxm\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.059231 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b259275d-d7e5-4973-83d5-e0df6620cf1b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160266 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160322 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2fxm\" (UniqueName: \"kubernetes.io/projected/b259275d-d7e5-4973-83d5-e0df6620cf1b-kube-api-access-v2fxm\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160356 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1bd0979d-914c-4abe-a0bf-aab589347139-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160380 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-wtmp\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160442 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b259275d-d7e5-4973-83d5-e0df6620cf1b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160467 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160496 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-textfile\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160523 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1bd0979d-914c-4abe-a0bf-aab589347139-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160544 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-sys\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160570 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drpn\" 
(UniqueName: \"kubernetes.io/projected/96621422-2758-47a9-bd86-2d31ce0e4cf0-kube-api-access-8drpn\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160595 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b259275d-d7e5-4973-83d5-e0df6620cf1b-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160620 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshcj\" (UniqueName: \"kubernetes.io/projected/1bd0979d-914c-4abe-a0bf-aab589347139-kube-api-access-jshcj\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160670 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-tls\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160693 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160719 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96621422-2758-47a9-bd86-2d31ce0e4cf0-metrics-client-ca\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160751 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-root\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.160773 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b259275d-d7e5-4973-83d5-e0df6620cf1b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.162795 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b259275d-d7e5-4973-83d5-e0df6620cf1b-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: 
\"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.167120 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b259275d-d7e5-4973-83d5-e0df6620cf1b-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.173560 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b259275d-d7e5-4973-83d5-e0df6620cf1b-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.193908 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2fxm\" (UniqueName: \"kubernetes.io/projected/b259275d-d7e5-4973-83d5-e0df6620cf1b-kube-api-access-v2fxm\") pod \"openshift-state-metrics-566fddb674-8rwfc\" (UID: \"b259275d-d7e5-4973-83d5-e0df6620cf1b\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.240681 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262212 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1bd0979d-914c-4abe-a0bf-aab589347139-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262319 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-wtmp\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262379 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-textfile\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262403 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-sys\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262422 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1bd0979d-914c-4abe-a0bf-aab589347139-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262449 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drpn\" (UniqueName: \"kubernetes.io/projected/96621422-2758-47a9-bd86-2d31ce0e4cf0-kube-api-access-8drpn\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262470 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jshcj\" (UniqueName: \"kubernetes.io/projected/1bd0979d-914c-4abe-a0bf-aab589347139-kube-api-access-jshcj\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262503 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-tls\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262525 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262551 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96621422-2758-47a9-bd86-2d31ce0e4cf0-metrics-client-ca\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc 
kubenswrapper[4792]: I1127 17:15:57.262584 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-root\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.262710 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-root\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: E1127 17:15:57.262803 4792 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Nov 27 17:15:57 crc kubenswrapper[4792]: E1127 17:15:57.262861 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-tls podName:96621422-2758-47a9-bd86-2d31ce0e4cf0 nodeName:}" failed. No retries permitted until 2025-11-27 17:15:57.76284289 +0000 UTC m=+380.105669208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-tls") pod "node-exporter-7zcts" (UID: "96621422-2758-47a9-bd86-2d31ce0e4cf0") : secret "node-exporter-tls" not found Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.263001 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1bd0979d-914c-4abe-a0bf-aab589347139-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.263001 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-sys\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.263133 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-wtmp\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.263531 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.263576 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-textfile\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts"
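
The two E-level records above are a benign ordering race: node-exporter-7zcts reached the kubelet before its node-exporter-tls secret existed, so MountVolume.SetUp fails and nestedpendingoperations schedules a retry ("No retries permitted until ...", durationBeforeRetry 500ms), with the delay growing on each consecutive failure of the same operation. Here a single retry suffices; the secret appears and the mount succeeds about half a second later (17:15:57.772702, below). A sketch of such a retry loop, assuming a 500ms initial delay that doubles, with an invented cap:

package main

import (
    "errors"
    "fmt"
    "time"
)

// mountWithBackoff retries a SetUp function, doubling the wait after each
// consecutive failure, in the spirit of the nestedpendingoperations records
// above. The 2m cap is an assumption for the sketch, not the kubelet's value.
func mountWithBackoff(setUp func() error) {
    delay := 500 * time.Millisecond
    const maxDelay = 2 * time.Minute
    for {
        err := setUp()
        if err == nil {
            return
        }
        fmt.Printf("SetUp failed (%v); no retries permitted for %v\n", err, delay)
        time.Sleep(delay)
        if delay *= 2; delay > maxDelay {
            delay = maxDelay
        }
    }
}

func main() {
    attempts := 0
    mountWithBackoff(func() error {
        attempts++
        if attempts < 2 {
            return errors.New(`secret "node-exporter-tls" not found`)
        }
        return nil // secret created in the meantime; mount succeeds
    })
    fmt.Printf("MountVolume.SetUp succeeded after %d attempts\n", attempts)
}
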
Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.263937 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/96621422-2758-47a9-bd86-2d31ce0e4cf0-metrics-client-ca\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.264329 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1bd0979d-914c-4abe-a0bf-aab589347139-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.267315 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.268193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.270962 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1bd0979d-914c-4abe-a0bf-aab589347139-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.281906 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drpn\" (UniqueName: \"kubernetes.io/projected/96621422-2758-47a9-bd86-2d31ce0e4cf0-kube-api-access-8drpn\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.283427 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshcj\" (UniqueName: \"kubernetes.io/projected/1bd0979d-914c-4abe-a0bf-aab589347139-kube-api-access-jshcj\") pod \"kube-state-metrics-777cb5bd5d-hdg6p\" (UID: \"1bd0979d-914c-4abe-a0bf-aab589347139\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.313979 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.695911 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc"] Nov 27 17:15:57 crc kubenswrapper[4792]: W1127 17:15:57.698328 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb259275d_d7e5_4973_83d5_e0df6620cf1b.slice/crio-437b0fdc4bab8d19b64fa50d912a874503b161f02dd0adc4ab7b620b33e8f282 WatchSource:0}: Error finding container 437b0fdc4bab8d19b64fa50d912a874503b161f02dd0adc4ab7b620b33e8f282: Status 404 returned error can't find the container with id 437b0fdc4bab8d19b64fa50d912a874503b161f02dd0adc4ab7b620b33e8f282 Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.756851 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p"] Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.767741 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-tls\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.772702 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/96621422-2758-47a9-bd86-2d31ce0e4cf0-node-exporter-tls\") pod \"node-exporter-7zcts\" (UID: \"96621422-2758-47a9-bd86-2d31ce0e4cf0\") " pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:57 crc kubenswrapper[4792]: I1127 17:15:57.949143 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-7zcts" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.088464 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.090663 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.092254 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.092312 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.092592 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.093516 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.093920 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.094193 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.094407 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-4gm2b" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.097676 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.107214 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.122189 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.133304 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" event={"ID":"1bd0979d-914c-4abe-a0bf-aab589347139","Type":"ContainerStarted","Data":"6ff19bcd10aa54ad89324999941d8a8fe5e69734687bc95b21f090f38fb85a24"} Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.134287 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7zcts" event={"ID":"96621422-2758-47a9-bd86-2d31ce0e4cf0","Type":"ContainerStarted","Data":"60c21be1dcb9900f7d373085b060c0f786287f42591a8c4beb1a07124b50ad6c"} Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.136895 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" event={"ID":"b259275d-d7e5-4973-83d5-e0df6620cf1b","Type":"ContainerStarted","Data":"596f187bb4434fecca6a7e2dc2e66a3a432ba519397ca3a97d17b034ff437962"} Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.136945 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" event={"ID":"b259275d-d7e5-4973-83d5-e0df6620cf1b","Type":"ContainerStarted","Data":"0796d2df31e532bfe992688ad80795789bf165670c81ba657642c853bd4b5944"} Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.136965 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" 
event={"ID":"b259275d-d7e5-4973-83d5-e0df6620cf1b","Type":"ContainerStarted","Data":"437b0fdc4bab8d19b64fa50d912a874503b161f02dd0adc4ab7b620b33e8f282"} Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.172757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3bc2beae-626c-47d1-8133-0b2252a25791-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.172802 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.172946 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3bc2beae-626c-47d1-8133-0b2252a25791-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.172996 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3bc2beae-626c-47d1-8133-0b2252a25791-config-out\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.173025 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3bc2beae-626c-47d1-8133-0b2252a25791-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.173073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-config-volume\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.173090 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bc2beae-626c-47d1-8133-0b2252a25791-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.173154 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g69kx\" (UniqueName: \"kubernetes.io/projected/3bc2beae-626c-47d1-8133-0b2252a25791-kube-api-access-g69kx\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.173174 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-web-config\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.173192 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.173210 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.173247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274184 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g69kx\" (UniqueName: \"kubernetes.io/projected/3bc2beae-626c-47d1-8133-0b2252a25791-kube-api-access-g69kx\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274226 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274250 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-web-config\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274301 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274338 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3bc2beae-626c-47d1-8133-0b2252a25791-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274358 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274384 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3bc2beae-626c-47d1-8133-0b2252a25791-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274402 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3bc2beae-626c-47d1-8133-0b2252a25791-config-out\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274417 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3bc2beae-626c-47d1-8133-0b2252a25791-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274434 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-config-volume\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.274448 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bc2beae-626c-47d1-8133-0b2252a25791-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.275711 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bc2beae-626c-47d1-8133-0b2252a25791-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.275948 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3bc2beae-626c-47d1-8133-0b2252a25791-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: 
\"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.275999 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3bc2beae-626c-47d1-8133-0b2252a25791-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.279482 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-web-config\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.279569 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.279772 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.280511 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3bc2beae-626c-47d1-8133-0b2252a25791-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.280704 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.281152 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3bc2beae-626c-47d1-8133-0b2252a25791-config-out\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.282257 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.291470 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g69kx\" (UniqueName: \"kubernetes.io/projected/3bc2beae-626c-47d1-8133-0b2252a25791-kube-api-access-g69kx\") pod \"alertmanager-main-0\" (UID: 
\"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.304580 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3bc2beae-626c-47d1-8133-0b2252a25791-config-volume\") pod \"alertmanager-main-0\" (UID: \"3bc2beae-626c-47d1-8133-0b2252a25791\") " pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.406192 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 27 17:15:58 crc kubenswrapper[4792]: I1127 17:15:58.880353 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 27 17:15:58 crc kubenswrapper[4792]: W1127 17:15:58.892676 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bc2beae_626c_47d1_8133_0b2252a25791.slice/crio-f26b8c241d902673f1a71190d38d9ffa3287c1259edb38a51d95b3dd0d37d326 WatchSource:0}: Error finding container f26b8c241d902673f1a71190d38d9ffa3287c1259edb38a51d95b3dd0d37d326: Status 404 returned error can't find the container with id f26b8c241d902673f1a71190d38d9ffa3287c1259edb38a51d95b3dd0d37d326 Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.091174 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-66bf6f75c8-v825n"] Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.093170 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.097423 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.097575 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.097712 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.097994 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.098205 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.101943 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-1d9da0tec8qet" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.102040 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-ldwmv" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.099567 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-66bf6f75c8-v825n"] Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.125197 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-web\") pod 
\"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.125270 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-tls\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.125293 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.125348 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/072be099-fcf5-4664-bddf-b189ebfa04b1-metrics-client-ca\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.125367 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsgvl\" (UniqueName: \"kubernetes.io/projected/072be099-fcf5-4664-bddf-b189ebfa04b1-kube-api-access-xsgvl\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.125382 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.125399 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-grpc-tls\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.125417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.142233 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"3bc2beae-626c-47d1-8133-0b2252a25791","Type":"ContainerStarted","Data":"f26b8c241d902673f1a71190d38d9ffa3287c1259edb38a51d95b3dd0d37d326"} Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.226629 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-tls\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.226940 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.226989 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/072be099-fcf5-4664-bddf-b189ebfa04b1-metrics-client-ca\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.227012 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsgvl\" (UniqueName: \"kubernetes.io/projected/072be099-fcf5-4664-bddf-b189ebfa04b1-kube-api-access-xsgvl\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.227036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.227058 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-grpc-tls\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.227081 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.227142 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: 
\"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.230163 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/072be099-fcf5-4664-bddf-b189ebfa04b1-metrics-client-ca\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.231981 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.232033 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-tls\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.232739 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.234987 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-grpc-tls\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.237435 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.237817 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/072be099-fcf5-4664-bddf-b189ebfa04b1-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc kubenswrapper[4792]: I1127 17:15:59.242623 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsgvl\" (UniqueName: \"kubernetes.io/projected/072be099-fcf5-4664-bddf-b189ebfa04b1-kube-api-access-xsgvl\") pod \"thanos-querier-66bf6f75c8-v825n\" (UID: \"072be099-fcf5-4664-bddf-b189ebfa04b1\") " pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:15:59 crc 
kubenswrapper[4792]: I1127 17:15:59.421369 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:16:00 crc kubenswrapper[4792]: I1127 17:16:00.867628 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-66bf6f75c8-v825n"] Nov 27 17:16:00 crc kubenswrapper[4792]: W1127 17:16:00.888872 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod072be099_fcf5_4664_bddf_b189ebfa04b1.slice/crio-919ec607b90ba23fee37fca871c62f08d755ec96625c778b907d4466c0a1fdad WatchSource:0}: Error finding container 919ec607b90ba23fee37fca871c62f08d755ec96625c778b907d4466c0a1fdad: Status 404 returned error can't find the container with id 919ec607b90ba23fee37fca871c62f08d755ec96625c778b907d4466c0a1fdad Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.167721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" event={"ID":"1bd0979d-914c-4abe-a0bf-aab589347139","Type":"ContainerStarted","Data":"b423fd3084f0ba99355c6d838372391beb2871e950d33f4bbf5b9d44d037112c"} Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.167764 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" event={"ID":"1bd0979d-914c-4abe-a0bf-aab589347139","Type":"ContainerStarted","Data":"2b36e891857d0454aa7776f6e9bf8aa0074b2fe83c2fd94664d9c99c7e3fdb8c"} Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.169330 4792 generic.go:334] "Generic (PLEG): container finished" podID="3bc2beae-626c-47d1-8133-0b2252a25791" containerID="3c7c6c00745a4e8c37a61f2de33bec3bb315a44068f798b4eeb358baceafaa5e" exitCode=0 Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.169378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3bc2beae-626c-47d1-8133-0b2252a25791","Type":"ContainerDied","Data":"3c7c6c00745a4e8c37a61f2de33bec3bb315a44068f798b4eeb358baceafaa5e"} Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.171608 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7zcts" event={"ID":"96621422-2758-47a9-bd86-2d31ce0e4cf0","Type":"ContainerStarted","Data":"6c1f2cc5d02c272632c79657904219aacb2f796c7fc68da92b3cd0a896bc6998"} Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.173923 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" event={"ID":"b259275d-d7e5-4973-83d5-e0df6620cf1b","Type":"ContainerStarted","Data":"dfc801a297419cd6c32b851e73ccabbf454c14d9b719976e2787e6cfbe023ff1"} Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.183152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" event={"ID":"072be099-fcf5-4664-bddf-b189ebfa04b1","Type":"ContainerStarted","Data":"919ec607b90ba23fee37fca871c62f08d755ec96625c778b907d4466c0a1fdad"} Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.226786 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-8rwfc" podStartSLOduration=2.9037004509999997 podStartE2EDuration="5.226768895s" podCreationTimestamp="2025-11-27 17:15:56 +0000 UTC" firstStartedPulling="2025-11-27 17:15:58.15583623 +0000 UTC m=+380.498662548" 
lastFinishedPulling="2025-11-27 17:16:00.478904654 +0000 UTC m=+382.821730992" observedRunningTime="2025-11-27 17:16:01.226032957 +0000 UTC m=+383.568859285" watchObservedRunningTime="2025-11-27 17:16:01.226768895 +0000 UTC m=+383.569595213" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.836012 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55499b9ddd-m2vw2"] Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.837529 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.849051 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55499b9ddd-m2vw2"] Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.872376 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-service-ca\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.872446 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-oauth-serving-cert\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.872491 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-config\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.872547 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-trusted-ca-bundle\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.872591 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-oauth-config\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.872618 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-serving-cert\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.872660 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwh6\" (UniqueName: \"kubernetes.io/projected/24b0cb28-9f67-4c1d-9740-ef0de742f063-kube-api-access-bcwh6\") pod 
\"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.977721 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-trusted-ca-bundle\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.977807 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-oauth-config\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.977831 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-serving-cert\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.977854 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwh6\" (UniqueName: \"kubernetes.io/projected/24b0cb28-9f67-4c1d-9740-ef0de742f063-kube-api-access-bcwh6\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.977896 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-service-ca\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.977926 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-oauth-serving-cert\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.977962 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-config\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.979815 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-config\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.980968 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-oauth-serving-cert\") pod 
\"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.981430 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-trusted-ca-bundle\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.982175 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-service-ca\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.986231 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-oauth-config\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:01 crc kubenswrapper[4792]: I1127 17:16:01.988580 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-serving-cert\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:01.999930 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwh6\" (UniqueName: \"kubernetes.io/projected/24b0cb28-9f67-4c1d-9740-ef0de742f063-kube-api-access-bcwh6\") pod \"console-55499b9ddd-m2vw2\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.153299 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.190990 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" event={"ID":"1bd0979d-914c-4abe-a0bf-aab589347139","Type":"ContainerStarted","Data":"676a771f34c2ee6f352dbfc73c2d76d4c443c3ea22ff778959260f807dfb2744"} Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.192950 4792 generic.go:334] "Generic (PLEG): container finished" podID="96621422-2758-47a9-bd86-2d31ce0e4cf0" containerID="6c1f2cc5d02c272632c79657904219aacb2f796c7fc68da92b3cd0a896bc6998" exitCode=0 Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.194382 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7zcts" event={"ID":"96621422-2758-47a9-bd86-2d31ce0e4cf0","Type":"ContainerDied","Data":"6c1f2cc5d02c272632c79657904219aacb2f796c7fc68da92b3cd0a896bc6998"} Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.215140 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-774d554459-cbczd"] Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.215838 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.222615 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dfvxr" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.222921 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.223032 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.223092 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5d2th1eercn91" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.223178 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.223224 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.226848 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hdg6p" podStartSLOduration=3.088903733 podStartE2EDuration="6.226831231s" podCreationTimestamp="2025-11-27 17:15:56 +0000 UTC" firstStartedPulling="2025-11-27 17:15:57.765508165 +0000 UTC m=+380.108334483" lastFinishedPulling="2025-11-27 17:16:00.903435653 +0000 UTC m=+383.246261981" observedRunningTime="2025-11-27 17:16:02.214959151 +0000 UTC m=+384.557785469" watchObservedRunningTime="2025-11-27 17:16:02.226831231 +0000 UTC m=+384.569657549" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.249743 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-774d554459-cbczd"] Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.282533 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/96346464-23bb-44bf-95a9-48f1934a358f-audit-log\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.282681 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-secret-metrics-server-tls\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.282763 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-client-ca-bundle\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.282871 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nvmr\" (UniqueName: 
\"kubernetes.io/projected/96346464-23bb-44bf-95a9-48f1934a358f-kube-api-access-7nvmr\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.283001 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/96346464-23bb-44bf-95a9-48f1934a358f-metrics-server-audit-profiles\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.283125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-secret-metrics-client-certs\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.283234 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96346464-23bb-44bf-95a9-48f1934a358f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.384568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-secret-metrics-server-tls\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.385092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-client-ca-bundle\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.385131 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nvmr\" (UniqueName: \"kubernetes.io/projected/96346464-23bb-44bf-95a9-48f1934a358f-kube-api-access-7nvmr\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.385177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/96346464-23bb-44bf-95a9-48f1934a358f-metrics-server-audit-profiles\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.385210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-secret-metrics-client-certs\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.385250 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96346464-23bb-44bf-95a9-48f1934a358f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.385293 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/96346464-23bb-44bf-95a9-48f1934a358f-audit-log\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.385817 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/96346464-23bb-44bf-95a9-48f1934a358f-audit-log\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.386671 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/96346464-23bb-44bf-95a9-48f1934a358f-metrics-server-audit-profiles\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.386716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96346464-23bb-44bf-95a9-48f1934a358f-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.392612 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-client-ca-bundle\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.390711 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-secret-metrics-client-certs\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.393603 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/96346464-23bb-44bf-95a9-48f1934a358f-secret-metrics-server-tls\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " 
pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.403995 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nvmr\" (UniqueName: \"kubernetes.io/projected/96346464-23bb-44bf-95a9-48f1934a358f-kube-api-access-7nvmr\") pod \"metrics-server-774d554459-cbczd\" (UID: \"96346464-23bb-44bf-95a9-48f1934a358f\") " pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.536654 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.593894 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55499b9ddd-m2vw2"] Nov 27 17:16:02 crc kubenswrapper[4792]: W1127 17:16:02.607679 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b0cb28_9f67_4c1d_9740_ef0de742f063.slice/crio-042a37f53bf2c262bae42344a4008ac9862b3c8f64a903eae137b66720e355ce WatchSource:0}: Error finding container 042a37f53bf2c262bae42344a4008ac9862b3c8f64a903eae137b66720e355ce: Status 404 returned error can't find the container with id 042a37f53bf2c262bae42344a4008ac9862b3c8f64a903eae137b66720e355ce Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.790882 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4"] Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.792167 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.796285 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.796343 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.800925 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4"] Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.891777 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/be27a54c-4d89-4d48-ae28-19592e3aa985-monitoring-plugin-cert\") pod \"monitoring-plugin-66d75469f5-r5kh4\" (UID: \"be27a54c-4d89-4d48-ae28-19592e3aa985\") " pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.974997 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-774d554459-cbczd"] Nov 27 17:16:02 crc kubenswrapper[4792]: I1127 17:16:02.994335 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/be27a54c-4d89-4d48-ae28-19592e3aa985-monitoring-plugin-cert\") pod \"monitoring-plugin-66d75469f5-r5kh4\" (UID: \"be27a54c-4d89-4d48-ae28-19592e3aa985\") " pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.005900 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/be27a54c-4d89-4d48-ae28-19592e3aa985-monitoring-plugin-cert\") pod \"monitoring-plugin-66d75469f5-r5kh4\" (UID: \"be27a54c-4d89-4d48-ae28-19592e3aa985\") " pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.115142 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.203272 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55499b9ddd-m2vw2" event={"ID":"24b0cb28-9f67-4c1d-9740-ef0de742f063","Type":"ContainerStarted","Data":"13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b"} Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.203320 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55499b9ddd-m2vw2" event={"ID":"24b0cb28-9f67-4c1d-9740-ef0de742f063","Type":"ContainerStarted","Data":"042a37f53bf2c262bae42344a4008ac9862b3c8f64a903eae137b66720e355ce"} Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.206617 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7zcts" event={"ID":"96621422-2758-47a9-bd86-2d31ce0e4cf0","Type":"ContainerStarted","Data":"46129583f0b42afe45fdc7b1eba7ff449136cdd8b9f990b664dbf3ca5e140196"} Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.206656 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-7zcts" event={"ID":"96621422-2758-47a9-bd86-2d31ce0e4cf0","Type":"ContainerStarted","Data":"9e91f0ca7c5110f025eaab09323c8d719b04442a3d51dc1da93dd193ffe3499b"} Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.242752 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-7zcts" podStartSLOduration=3.767620071 podStartE2EDuration="6.242737168s" podCreationTimestamp="2025-11-27 17:15:57 +0000 UTC" firstStartedPulling="2025-11-27 17:15:57.994835791 +0000 UTC m=+380.337662109" lastFinishedPulling="2025-11-27 17:16:00.469952888 +0000 UTC m=+382.812779206" observedRunningTime="2025-11-27 17:16:03.241841245 +0000 UTC m=+385.584667563" watchObservedRunningTime="2025-11-27 17:16:03.242737168 +0000 UTC m=+385.585563486" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.244795 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55499b9ddd-m2vw2" podStartSLOduration=2.2447882200000002 podStartE2EDuration="2.24478822s" podCreationTimestamp="2025-11-27 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:16:03.22462755 +0000 UTC m=+385.567453888" watchObservedRunningTime="2025-11-27 17:16:03.24478822 +0000 UTC m=+385.587614538" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.355005 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.357992 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.362433 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.362671 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.362814 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.362982 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.364204 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.364435 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.365687 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.365911 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.369925 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.369981 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-4fn1eujckbri7" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.375124 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-l4h6w" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.375215 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.384428 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.385964 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403209 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-web-config\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403246 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403286 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403320 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403351 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403381 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403403 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403427 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403455 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403481 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403504 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg68g\" (UniqueName: \"kubernetes.io/projected/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-kube-api-access-sg68g\") pod \"prometheus-k8s-0\" (UID: 
\"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403556 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403573 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403614 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-config-out\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403655 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403680 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.403702 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-config\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506457 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506530 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506559 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg68g\" (UniqueName: \"kubernetes.io/projected/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-kube-api-access-sg68g\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506588 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506619 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506661 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506713 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-config-out\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506764 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506789 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506812 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-config\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506837 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-web-config\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506862 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506892 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506924 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506956 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.506983 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.507006 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.507025 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.508211 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.508398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.508954 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.510800 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.513781 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.514325 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-config\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.516471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.517848 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.518509 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.518923 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-web-config\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.519213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.519441 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-config-out\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.523198 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.523426 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.523578 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.523629 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg68g\" (UniqueName: \"kubernetes.io/projected/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-kube-api-access-sg68g\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.523687 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.525265 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:03 crc kubenswrapper[4792]: I1127 17:16:03.736685 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.035273 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.096713 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4"] Nov 27 17:16:05 crc kubenswrapper[4792]: W1127 17:16:05.101831 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe27a54c_4d89_4d48_ae28_19592e3aa985.slice/crio-d070f081d4ac6a2994ebcb6f9eb8332c608a6a9f6862a4c0f5fa31ecbce72f76 WatchSource:0}: Error finding container d070f081d4ac6a2994ebcb6f9eb8332c608a6a9f6862a4c0f5fa31ecbce72f76: Status 404 returned error can't find the container with id d070f081d4ac6a2994ebcb6f9eb8332c608a6a9f6862a4c0f5fa31ecbce72f76 Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.222623 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerStarted","Data":"c15491c702f21f6e6c6c93d156068d348486ce5c1636b7dace8acf4413e10eea"} Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.222723 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerStarted","Data":"7919f6e254086b6779aba76bfa1cbcd1d237588dce4024bb24dbb30d039cc220"} Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.226375 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3bc2beae-626c-47d1-8133-0b2252a25791","Type":"ContainerStarted","Data":"647a6bf14a12162c12d4c0f1d41848c1cf2f80ad91524ab78fe9e4645d83950a"} Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.226494 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3bc2beae-626c-47d1-8133-0b2252a25791","Type":"ContainerStarted","Data":"d79891e28482d654e9477f14e1ab56e59088b2467e34a7cbca0cac2f03787794"} Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.226569 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3bc2beae-626c-47d1-8133-0b2252a25791","Type":"ContainerStarted","Data":"593bc8f28e3b8e13eadeaa6dd0affc05b5b4e7dda3546009f2af4cc178441c7c"} Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.228889 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-774d554459-cbczd" event={"ID":"96346464-23bb-44bf-95a9-48f1934a358f","Type":"ContainerStarted","Data":"cb702bc587a7a3b63507c4f686d0943c1eb4cb18ef974b6441c059804a2011c4"} Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.238829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" event={"ID":"072be099-fcf5-4664-bddf-b189ebfa04b1","Type":"ContainerStarted","Data":"8b1e345a5aa7b39312667ad4176ac3ed0bbbcef3e25d83d42b9985fc9b5d5b1c"} Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.238879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" event={"ID":"072be099-fcf5-4664-bddf-b189ebfa04b1","Type":"ContainerStarted","Data":"33c135b541029415c25d7cd53da4d2ffe444c90a9afc7bf65e45ae856bcdbd46"} Nov 27 17:16:05 crc 
kubenswrapper[4792]: I1127 17:16:05.238894 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" event={"ID":"072be099-fcf5-4664-bddf-b189ebfa04b1","Type":"ContainerStarted","Data":"797c22d2134cef0127972699b67a1c165de7f9fb7c6498d6f5fb5f8ab058557b"} Nov 27 17:16:05 crc kubenswrapper[4792]: I1127 17:16:05.240281 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" event={"ID":"be27a54c-4d89-4d48-ae28-19592e3aa985","Type":"ContainerStarted","Data":"d070f081d4ac6a2994ebcb6f9eb8332c608a6a9f6862a4c0f5fa31ecbce72f76"} Nov 27 17:16:06 crc kubenswrapper[4792]: I1127 17:16:06.247190 4792 generic.go:334] "Generic (PLEG): container finished" podID="bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d" containerID="c15491c702f21f6e6c6c93d156068d348486ce5c1636b7dace8acf4413e10eea" exitCode=0 Nov 27 17:16:06 crc kubenswrapper[4792]: I1127 17:16:06.247232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerDied","Data":"c15491c702f21f6e6c6c93d156068d348486ce5c1636b7dace8acf4413e10eea"} Nov 27 17:16:06 crc kubenswrapper[4792]: I1127 17:16:06.250472 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3bc2beae-626c-47d1-8133-0b2252a25791","Type":"ContainerStarted","Data":"6493cfc0a01f63f62adf6f6c648e4e33d6a093dceb27df05a37841360958652f"} Nov 27 17:16:06 crc kubenswrapper[4792]: I1127 17:16:06.250497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3bc2beae-626c-47d1-8133-0b2252a25791","Type":"ContainerStarted","Data":"7ed0ab764d7835e437a630f6ad815f90ebc2e57574cae06a3550ea3b19c3d925"} Nov 27 17:16:07 crc kubenswrapper[4792]: I1127 17:16:07.258497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" event={"ID":"be27a54c-4d89-4d48-ae28-19592e3aa985","Type":"ContainerStarted","Data":"df45baa1f411e60333acc56416d264b25ab3f8b8ace568831f8b5373005acea3"} Nov 27 17:16:07 crc kubenswrapper[4792]: I1127 17:16:07.259835 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" Nov 27 17:16:07 crc kubenswrapper[4792]: I1127 17:16:07.261694 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-774d554459-cbczd" event={"ID":"96346464-23bb-44bf-95a9-48f1934a358f","Type":"ContainerStarted","Data":"c1a3aaaa0c5722f67de510293725f6a84715724e3630c92f0c95e679db6cc942"} Nov 27 17:16:07 crc kubenswrapper[4792]: I1127 17:16:07.265457 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" Nov 27 17:16:07 crc kubenswrapper[4792]: I1127 17:16:07.285866 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-66d75469f5-r5kh4" podStartSLOduration=3.612288326 podStartE2EDuration="5.285841244s" podCreationTimestamp="2025-11-27 17:16:02 +0000 UTC" firstStartedPulling="2025-11-27 17:16:05.105004255 +0000 UTC m=+387.447830573" lastFinishedPulling="2025-11-27 17:16:06.778557163 +0000 UTC m=+389.121383491" observedRunningTime="2025-11-27 17:16:07.28169856 +0000 UTC m=+389.624524908" watchObservedRunningTime="2025-11-27 17:16:07.285841244 +0000 UTC m=+389.628667562" Nov 27 17:16:07 crc 
kubenswrapper[4792]: I1127 17:16:07.301242 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-774d554459-cbczd" podStartSLOduration=3.013959553 podStartE2EDuration="5.301220643s" podCreationTimestamp="2025-11-27 17:16:02 +0000 UTC" firstStartedPulling="2025-11-27 17:16:04.489553259 +0000 UTC m=+386.832379577" lastFinishedPulling="2025-11-27 17:16:06.776814359 +0000 UTC m=+389.119640667" observedRunningTime="2025-11-27 17:16:07.296925734 +0000 UTC m=+389.639752052" watchObservedRunningTime="2025-11-27 17:16:07.301220643 +0000 UTC m=+389.644046961" Nov 27 17:16:08 crc kubenswrapper[4792]: I1127 17:16:08.280058 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" event={"ID":"072be099-fcf5-4664-bddf-b189ebfa04b1","Type":"ContainerStarted","Data":"e0ecce3f92fba30070aeeeffb99f645b58314ae3b33bef39021189fa6b06b644"} Nov 27 17:16:08 crc kubenswrapper[4792]: I1127 17:16:08.290021 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:16:08 crc kubenswrapper[4792]: I1127 17:16:08.290088 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:16:09 crc kubenswrapper[4792]: I1127 17:16:09.289409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" event={"ID":"072be099-fcf5-4664-bddf-b189ebfa04b1","Type":"ContainerStarted","Data":"b3e96a2a01c9781205dbde635a1c4dcfed28cb1c1d6453d4a3e73324a142152f"} Nov 27 17:16:09 crc kubenswrapper[4792]: I1127 17:16:09.293042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3bc2beae-626c-47d1-8133-0b2252a25791","Type":"ContainerStarted","Data":"07587e408cfd2921c45f69c842608f6f12424ab1659c093aed94b064e171d17b"} Nov 27 17:16:09 crc kubenswrapper[4792]: I1127 17:16:09.325629 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=1.962166533 podStartE2EDuration="11.325608837s" podCreationTimestamp="2025-11-27 17:15:58 +0000 UTC" firstStartedPulling="2025-11-27 17:15:58.895414892 +0000 UTC m=+381.238241210" lastFinishedPulling="2025-11-27 17:16:08.258857196 +0000 UTC m=+390.601683514" observedRunningTime="2025-11-27 17:16:09.320756055 +0000 UTC m=+391.663582393" watchObservedRunningTime="2025-11-27 17:16:09.325608837 +0000 UTC m=+391.668435155" Nov 27 17:16:10 crc kubenswrapper[4792]: I1127 17:16:10.302406 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" event={"ID":"072be099-fcf5-4664-bddf-b189ebfa04b1","Type":"ContainerStarted","Data":"b27b2d47246b2645890965f6d0fb8590d875b3decf9df5d996710de7a1d42f40"} Nov 27 17:16:10 crc kubenswrapper[4792]: I1127 17:16:10.303964 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:16:10 crc kubenswrapper[4792]: I1127 17:16:10.304803 4792 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerStarted","Data":"7fbd453eb73abe3de3e3bcdd23a3abaaa019da99437012ec5ac62b3c0b7da699"} Nov 27 17:16:10 crc kubenswrapper[4792]: I1127 17:16:10.304878 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerStarted","Data":"3420700de14f489a0b67d5c1472862c6c8aa55ebe2feee7d4a5e1616acea4355"} Nov 27 17:16:10 crc kubenswrapper[4792]: I1127 17:16:10.349744 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" Nov 27 17:16:10 crc kubenswrapper[4792]: I1127 17:16:10.384480 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-66bf6f75c8-v825n" podStartSLOduration=4.251386076 podStartE2EDuration="11.384450548s" podCreationTimestamp="2025-11-27 17:15:59 +0000 UTC" firstStartedPulling="2025-11-27 17:16:00.891393729 +0000 UTC m=+383.234220057" lastFinishedPulling="2025-11-27 17:16:08.024458211 +0000 UTC m=+390.367284529" observedRunningTime="2025-11-27 17:16:10.328997812 +0000 UTC m=+392.671824130" watchObservedRunningTime="2025-11-27 17:16:10.384450548 +0000 UTC m=+392.727276866" Nov 27 17:16:11 crc kubenswrapper[4792]: I1127 17:16:11.317404 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerStarted","Data":"eafeccddd7f1990e86c08a945ab69ae2ad5efa76f130920e698a883dfcf10fbe"} Nov 27 17:16:11 crc kubenswrapper[4792]: I1127 17:16:11.317859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerStarted","Data":"c4417e3292cbb025050451816b90a9c522c6f97b2f13f3aff9b9e5e6baaf09a6"} Nov 27 17:16:11 crc kubenswrapper[4792]: I1127 17:16:11.317881 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerStarted","Data":"3f7e2ce09a1d7773b8cb7c4572abceec3a1275977029c34ab00aac94367310da"} Nov 27 17:16:11 crc kubenswrapper[4792]: I1127 17:16:11.317900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"bf07c8d1-cf7f-4882-8ce3-6162d0ddee2d","Type":"ContainerStarted","Data":"b76c07c55f2b569668b1805cf820ae1aa21641c65e042c7cc9bb2af8d4ba8ace"} Nov 27 17:16:11 crc kubenswrapper[4792]: I1127 17:16:11.354456 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.726920755 podStartE2EDuration="8.354430997s" podCreationTimestamp="2025-11-27 17:16:03 +0000 UTC" firstStartedPulling="2025-11-27 17:16:06.248516766 +0000 UTC m=+388.591343084" lastFinishedPulling="2025-11-27 17:16:09.876027008 +0000 UTC m=+392.218853326" observedRunningTime="2025-11-27 17:16:11.352098989 +0000 UTC m=+393.694925337" watchObservedRunningTime="2025-11-27 17:16:11.354430997 +0000 UTC m=+393.697257325" Nov 27 17:16:12 crc kubenswrapper[4792]: I1127 17:16:12.153870 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:12 crc kubenswrapper[4792]: I1127 17:16:12.153917 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:12 crc kubenswrapper[4792]: I1127 17:16:12.160688 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:12 crc kubenswrapper[4792]: I1127 17:16:12.332795 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:16:12 crc kubenswrapper[4792]: I1127 17:16:12.412364 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-k86pd"] Nov 27 17:16:13 crc kubenswrapper[4792]: I1127 17:16:13.737478 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:16:22 crc kubenswrapper[4792]: I1127 17:16:22.537453 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:22 crc kubenswrapper[4792]: I1127 17:16:22.538510 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:37 crc kubenswrapper[4792]: I1127 17:16:37.479673 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-k86pd" podUID="93d84de9-e75f-4127-b3ee-890375498dc3" containerName="console" containerID="cri-o://84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51" gracePeriod=15 Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.000391 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-k86pd_93d84de9-e75f-4127-b3ee-890375498dc3/console/0.log" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.000766 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.056953 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-console-config\") pod \"93d84de9-e75f-4127-b3ee-890375498dc3\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057058 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq47b\" (UniqueName: \"kubernetes.io/projected/93d84de9-e75f-4127-b3ee-890375498dc3-kube-api-access-dq47b\") pod \"93d84de9-e75f-4127-b3ee-890375498dc3\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057108 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-service-ca\") pod \"93d84de9-e75f-4127-b3ee-890375498dc3\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057149 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-oauth-config\") pod \"93d84de9-e75f-4127-b3ee-890375498dc3\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057196 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-serving-cert\") pod \"93d84de9-e75f-4127-b3ee-890375498dc3\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057225 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-oauth-serving-cert\") pod \"93d84de9-e75f-4127-b3ee-890375498dc3\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057255 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-trusted-ca-bundle\") pod \"93d84de9-e75f-4127-b3ee-890375498dc3\" (UID: \"93d84de9-e75f-4127-b3ee-890375498dc3\") " Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057758 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-console-config" (OuterVolumeSpecName: "console-config") pod "93d84de9-e75f-4127-b3ee-890375498dc3" (UID: "93d84de9-e75f-4127-b3ee-890375498dc3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057780 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-service-ca" (OuterVolumeSpecName: "service-ca") pod "93d84de9-e75f-4127-b3ee-890375498dc3" (UID: "93d84de9-e75f-4127-b3ee-890375498dc3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057802 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "93d84de9-e75f-4127-b3ee-890375498dc3" (UID: "93d84de9-e75f-4127-b3ee-890375498dc3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.057859 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "93d84de9-e75f-4127-b3ee-890375498dc3" (UID: "93d84de9-e75f-4127-b3ee-890375498dc3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.062661 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93d84de9-e75f-4127-b3ee-890375498dc3-kube-api-access-dq47b" (OuterVolumeSpecName: "kube-api-access-dq47b") pod "93d84de9-e75f-4127-b3ee-890375498dc3" (UID: "93d84de9-e75f-4127-b3ee-890375498dc3"). InnerVolumeSpecName "kube-api-access-dq47b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.063268 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "93d84de9-e75f-4127-b3ee-890375498dc3" (UID: "93d84de9-e75f-4127-b3ee-890375498dc3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.063376 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "93d84de9-e75f-4127-b3ee-890375498dc3" (UID: "93d84de9-e75f-4127-b3ee-890375498dc3"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.159957 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.160017 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.160040 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/93d84de9-e75f-4127-b3ee-890375498dc3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.160059 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.160078 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.160096 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/93d84de9-e75f-4127-b3ee-890375498dc3-console-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.160114 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq47b\" (UniqueName: \"kubernetes.io/projected/93d84de9-e75f-4127-b3ee-890375498dc3-kube-api-access-dq47b\") on node \"crc\" DevicePath \"\"" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.290869 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.290944 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.290998 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.291746 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"07733d4a5a2764e892f619318dddf4bb5833d9d78d072e993f8c20fe552da65d"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.291844 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://07733d4a5a2764e892f619318dddf4bb5833d9d78d072e993f8c20fe552da65d" gracePeriod=600 Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.527832 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="07733d4a5a2764e892f619318dddf4bb5833d9d78d072e993f8c20fe552da65d" exitCode=0 Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.527957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"07733d4a5a2764e892f619318dddf4bb5833d9d78d072e993f8c20fe552da65d"} Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.528051 4792 scope.go:117] "RemoveContainer" containerID="5896f01a51aa47e05d0b7abd9a205e2a41d6392f183edf3543b79f1ca57bef59" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.531548 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-k86pd_93d84de9-e75f-4127-b3ee-890375498dc3/console/0.log" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.531597 4792 generic.go:334] "Generic (PLEG): container finished" podID="93d84de9-e75f-4127-b3ee-890375498dc3" containerID="84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51" exitCode=2 Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.531626 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k86pd" event={"ID":"93d84de9-e75f-4127-b3ee-890375498dc3","Type":"ContainerDied","Data":"84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51"} Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.531710 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k86pd" event={"ID":"93d84de9-e75f-4127-b3ee-890375498dc3","Type":"ContainerDied","Data":"c73e6b99baefd2c9ed97ff23b12c864b3580d34ac9c781cfb4bc42376b062b80"} Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.531754 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-k86pd" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.569350 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-k86pd"] Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.573674 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-k86pd"] Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.663970 4792 scope.go:117] "RemoveContainer" containerID="84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.679282 4792 scope.go:117] "RemoveContainer" containerID="84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51" Nov 27 17:16:38 crc kubenswrapper[4792]: E1127 17:16:38.679744 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51\": container with ID starting with 84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51 not found: ID does not exist" containerID="84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.679787 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51"} err="failed to get container status \"84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51\": rpc error: code = NotFound desc = could not find container \"84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51\": container with ID starting with 84c5c48aac089760cb61a81051db5633b72ff8c41bc5fd37282c4563d85bfc51 not found: ID does not exist" Nov 27 17:16:38 crc kubenswrapper[4792]: I1127 17:16:38.694277 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d84de9-e75f-4127-b3ee-890375498dc3" path="/var/lib/kubelet/pods/93d84de9-e75f-4127-b3ee-890375498dc3/volumes" Nov 27 17:16:39 crc kubenswrapper[4792]: I1127 17:16:39.541693 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"e077930b952b5bb442db4d4d9a23e8530f27542b022a402bd9965e40a6267099"} Nov 27 17:16:42 crc kubenswrapper[4792]: I1127 17:16:42.551533 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:16:42 crc kubenswrapper[4792]: I1127 17:16:42.556413 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-774d554459-cbczd" Nov 27 17:17:03 crc kubenswrapper[4792]: I1127 17:17:03.737390 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:17:03 crc kubenswrapper[4792]: I1127 17:17:03.783890 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:17:04 crc kubenswrapper[4792]: I1127 17:17:04.765200 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.410823 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-54b4d94fcb-zr2dd"] Nov 27 17:17:15 crc kubenswrapper[4792]: 
E1127 17:17:15.411803 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d84de9-e75f-4127-b3ee-890375498dc3" containerName="console" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.411824 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d84de9-e75f-4127-b3ee-890375498dc3" containerName="console" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.411970 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d84de9-e75f-4127-b3ee-890375498dc3" containerName="console" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.412507 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.424559 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54b4d94fcb-zr2dd"] Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.545211 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-oauth-serving-cert\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.545256 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-oauth-config\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.545292 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-serving-cert\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.545331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-trusted-ca-bundle\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.545379 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55b6z\" (UniqueName: \"kubernetes.io/projected/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-kube-api-access-55b6z\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.545402 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-config\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.545419 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-service-ca\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.646843 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-oauth-serving-cert\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.646914 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-oauth-config\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.646941 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-serving-cert\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.646993 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-trusted-ca-bundle\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.647063 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55b6z\" (UniqueName: \"kubernetes.io/projected/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-kube-api-access-55b6z\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.647100 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-config\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.647123 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-service-ca\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.648105 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-service-ca\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.648108 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-oauth-serving-cert\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.648782 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-config\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.650240 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-trusted-ca-bundle\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.653849 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-oauth-config\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.653895 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-serving-cert\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.679545 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55b6z\" (UniqueName: \"kubernetes.io/projected/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-kube-api-access-55b6z\") pod \"console-54b4d94fcb-zr2dd\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:15 crc kubenswrapper[4792]: I1127 17:17:15.734301 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:16 crc kubenswrapper[4792]: I1127 17:17:16.164998 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-54b4d94fcb-zr2dd"] Nov 27 17:17:16 crc kubenswrapper[4792]: I1127 17:17:16.809762 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54b4d94fcb-zr2dd" event={"ID":"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb","Type":"ContainerStarted","Data":"3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18"} Nov 27 17:17:16 crc kubenswrapper[4792]: I1127 17:17:16.809860 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54b4d94fcb-zr2dd" event={"ID":"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb","Type":"ContainerStarted","Data":"664954eaa21211e4781a327c5eadd3d68a41b314f720e945a90d4367bdfcf6e8"} Nov 27 17:17:16 crc kubenswrapper[4792]: I1127 17:17:16.843386 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-54b4d94fcb-zr2dd" podStartSLOduration=1.843366826 podStartE2EDuration="1.843366826s" podCreationTimestamp="2025-11-27 17:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:17:16.842889844 +0000 UTC m=+459.185716172" watchObservedRunningTime="2025-11-27 17:17:16.843366826 +0000 UTC m=+459.186193154" Nov 27 17:17:25 crc kubenswrapper[4792]: I1127 17:17:25.735584 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:25 crc kubenswrapper[4792]: I1127 17:17:25.735968 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:25 crc kubenswrapper[4792]: I1127 17:17:25.740319 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:25 crc kubenswrapper[4792]: I1127 17:17:25.869080 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:17:25 crc kubenswrapper[4792]: I1127 17:17:25.927561 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55499b9ddd-m2vw2"] Nov 27 17:17:50 crc kubenswrapper[4792]: I1127 17:17:50.972325 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-55499b9ddd-m2vw2" podUID="24b0cb28-9f67-4c1d-9740-ef0de742f063" containerName="console" containerID="cri-o://13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b" gracePeriod=15 Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.354156 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55499b9ddd-m2vw2_24b0cb28-9f67-4c1d-9740-ef0de742f063/console/0.log" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.354467 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.427384 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-service-ca\") pod \"24b0cb28-9f67-4c1d-9740-ef0de742f063\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.427479 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-trusted-ca-bundle\") pod \"24b0cb28-9f67-4c1d-9740-ef0de742f063\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.427506 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-oauth-serving-cert\") pod \"24b0cb28-9f67-4c1d-9740-ef0de742f063\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.427566 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-serving-cert\") pod \"24b0cb28-9f67-4c1d-9740-ef0de742f063\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.427598 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcwh6\" (UniqueName: \"kubernetes.io/projected/24b0cb28-9f67-4c1d-9740-ef0de742f063-kube-api-access-bcwh6\") pod \"24b0cb28-9f67-4c1d-9740-ef0de742f063\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.427620 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-config\") pod \"24b0cb28-9f67-4c1d-9740-ef0de742f063\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.427655 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-oauth-config\") pod \"24b0cb28-9f67-4c1d-9740-ef0de742f063\" (UID: \"24b0cb28-9f67-4c1d-9740-ef0de742f063\") " Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.428965 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "24b0cb28-9f67-4c1d-9740-ef0de742f063" (UID: "24b0cb28-9f67-4c1d-9740-ef0de742f063"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.429551 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-config" (OuterVolumeSpecName: "console-config") pod "24b0cb28-9f67-4c1d-9740-ef0de742f063" (UID: "24b0cb28-9f67-4c1d-9740-ef0de742f063"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.429462 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-service-ca" (OuterVolumeSpecName: "service-ca") pod "24b0cb28-9f67-4c1d-9740-ef0de742f063" (UID: "24b0cb28-9f67-4c1d-9740-ef0de742f063"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.429676 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "24b0cb28-9f67-4c1d-9740-ef0de742f063" (UID: "24b0cb28-9f67-4c1d-9740-ef0de742f063"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.432434 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "24b0cb28-9f67-4c1d-9740-ef0de742f063" (UID: "24b0cb28-9f67-4c1d-9740-ef0de742f063"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.432501 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "24b0cb28-9f67-4c1d-9740-ef0de742f063" (UID: "24b0cb28-9f67-4c1d-9740-ef0de742f063"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.432544 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b0cb28-9f67-4c1d-9740-ef0de742f063-kube-api-access-bcwh6" (OuterVolumeSpecName: "kube-api-access-bcwh6") pod "24b0cb28-9f67-4c1d-9740-ef0de742f063" (UID: "24b0cb28-9f67-4c1d-9740-ef0de742f063"). InnerVolumeSpecName "kube-api-access-bcwh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.529013 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.529044 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcwh6\" (UniqueName: \"kubernetes.io/projected/24b0cb28-9f67-4c1d-9740-ef0de742f063-kube-api-access-bcwh6\") on node \"crc\" DevicePath \"\"" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.529055 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.529063 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/24b0cb28-9f67-4c1d-9740-ef0de742f063-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.529205 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.529213 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:17:51 crc kubenswrapper[4792]: I1127 17:17:51.529272 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/24b0cb28-9f67-4c1d-9740-ef0de742f063-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.066458 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55499b9ddd-m2vw2_24b0cb28-9f67-4c1d-9740-ef0de742f063/console/0.log" Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.066840 4792 generic.go:334] "Generic (PLEG): container finished" podID="24b0cb28-9f67-4c1d-9740-ef0de742f063" containerID="13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b" exitCode=2 Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.066875 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55499b9ddd-m2vw2" event={"ID":"24b0cb28-9f67-4c1d-9740-ef0de742f063","Type":"ContainerDied","Data":"13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b"} Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.066913 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55499b9ddd-m2vw2" event={"ID":"24b0cb28-9f67-4c1d-9740-ef0de742f063","Type":"ContainerDied","Data":"042a37f53bf2c262bae42344a4008ac9862b3c8f64a903eae137b66720e355ce"} Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.066935 4792 scope.go:117] "RemoveContainer" containerID="13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b" Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.067101 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55499b9ddd-m2vw2" Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.092246 4792 scope.go:117] "RemoveContainer" containerID="13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b" Nov 27 17:17:52 crc kubenswrapper[4792]: E1127 17:17:52.093971 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b\": container with ID starting with 13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b not found: ID does not exist" containerID="13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b" Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.094004 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b"} err="failed to get container status \"13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b\": rpc error: code = NotFound desc = could not find container \"13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b\": container with ID starting with 13f72fa5770de9c7475feeee6f1aeb0204dbc3d6a0e1f8889fade627d2a8516b not found: ID does not exist" Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.099685 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55499b9ddd-m2vw2"] Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.103395 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-55499b9ddd-m2vw2"] Nov 27 17:17:52 crc kubenswrapper[4792]: I1127 17:17:52.693543 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b0cb28-9f67-4c1d-9740-ef0de742f063" path="/var/lib/kubelet/pods/24b0cb28-9f67-4c1d-9740-ef0de742f063/volumes" Nov 27 17:18:38 crc kubenswrapper[4792]: I1127 17:18:38.290305 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:18:38 crc kubenswrapper[4792]: I1127 17:18:38.291023 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:18:38 crc kubenswrapper[4792]: I1127 17:18:38.910607 4792 scope.go:117] "RemoveContainer" containerID="87e9295e4e3bcc62b762b3a76d0c30970089709dae92380fae80c102d079a7d3" Nov 27 17:19:08 crc kubenswrapper[4792]: I1127 17:19:08.290884 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:19:08 crc kubenswrapper[4792]: I1127 17:19:08.291748 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.290792 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.291521 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.291589 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.292390 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e077930b952b5bb442db4d4d9a23e8530f27542b022a402bd9965e40a6267099"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.292458 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://e077930b952b5bb442db4d4d9a23e8530f27542b022a402bd9965e40a6267099" gracePeriod=600 Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.819336 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="e077930b952b5bb442db4d4d9a23e8530f27542b022a402bd9965e40a6267099" exitCode=0 Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.819409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"e077930b952b5bb442db4d4d9a23e8530f27542b022a402bd9965e40a6267099"} Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.820863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"9f01bf94bd55fb4aa5577fea4f28f3b654e0b34834b1e5c5ebc907510f5b8133"} Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.820895 4792 scope.go:117] "RemoveContainer" containerID="07733d4a5a2764e892f619318dddf4bb5833d9d78d072e993f8c20fe552da65d" Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.964204 4792 scope.go:117] "RemoveContainer" containerID="34699664bdfbb62220f3ff185873bd433e4793a7bf609a3c8cbdf0bad7b9e931" Nov 27 17:19:38 crc kubenswrapper[4792]: I1127 17:19:38.979475 4792 scope.go:117] "RemoveContainer" containerID="465484a0e5d91cb8402f4bbb87831740f0c289478266cfc27123c2dbd63bcc13" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.378663 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4"] Nov 27 17:19:48 crc kubenswrapper[4792]: E1127 17:19:48.379369 4792 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b0cb28-9f67-4c1d-9740-ef0de742f063" containerName="console" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.379382 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b0cb28-9f67-4c1d-9740-ef0de742f063" containerName="console" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.379499 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b0cb28-9f67-4c1d-9740-ef0de742f063" containerName="console" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.380300 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.383005 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.392138 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4"] Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.510439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.510500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctpv\" (UniqueName: \"kubernetes.io/projected/6a7fb352-ca11-42ed-9d3d-296e3747292f-kube-api-access-7ctpv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.510526 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.612013 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.612510 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctpv\" (UniqueName: \"kubernetes.io/projected/6a7fb352-ca11-42ed-9d3d-296e3747292f-kube-api-access-7ctpv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc 
kubenswrapper[4792]: I1127 17:19:48.612627 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.612549 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.613105 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.630050 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctpv\" (UniqueName: \"kubernetes.io/projected/6a7fb352-ca11-42ed-9d3d-296e3747292f-kube-api-access-7ctpv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:48 crc kubenswrapper[4792]: I1127 17:19:48.700295 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:49 crc kubenswrapper[4792]: I1127 17:19:49.090192 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4"] Nov 27 17:19:49 crc kubenswrapper[4792]: I1127 17:19:49.895747 4792 generic.go:334] "Generic (PLEG): container finished" podID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerID="3d18c5c4f88f7869e7c70a472f772a76ea4096c5cd78ac357b09074d4940dca6" exitCode=0 Nov 27 17:19:49 crc kubenswrapper[4792]: I1127 17:19:49.895810 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" event={"ID":"6a7fb352-ca11-42ed-9d3d-296e3747292f","Type":"ContainerDied","Data":"3d18c5c4f88f7869e7c70a472f772a76ea4096c5cd78ac357b09074d4940dca6"} Nov 27 17:19:49 crc kubenswrapper[4792]: I1127 17:19:49.896070 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" event={"ID":"6a7fb352-ca11-42ed-9d3d-296e3747292f","Type":"ContainerStarted","Data":"c6cb68fbf0f7a5a0b8bad24dccb7549d1efce9c47ea29c187cf58dd9f91c0485"} Nov 27 17:19:49 crc kubenswrapper[4792]: I1127 17:19:49.898174 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 17:19:51 crc kubenswrapper[4792]: I1127 17:19:51.926083 4792 generic.go:334] "Generic (PLEG): container finished" podID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerID="6df75c46ca47d22d13df6124566d19e13f6d62973ccf2ffe8a04a121036c9a04" exitCode=0 Nov 27 17:19:51 crc kubenswrapper[4792]: I1127 17:19:51.926177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" event={"ID":"6a7fb352-ca11-42ed-9d3d-296e3747292f","Type":"ContainerDied","Data":"6df75c46ca47d22d13df6124566d19e13f6d62973ccf2ffe8a04a121036c9a04"} Nov 27 17:19:52 crc kubenswrapper[4792]: I1127 17:19:52.936352 4792 generic.go:334] "Generic (PLEG): container finished" podID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerID="cde66d44f4120f8f61103f20c81587bc106958424aaad8d3842355d2ff18cc22" exitCode=0 Nov 27 17:19:52 crc kubenswrapper[4792]: I1127 17:19:52.936447 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" event={"ID":"6a7fb352-ca11-42ed-9d3d-296e3747292f","Type":"ContainerDied","Data":"cde66d44f4120f8f61103f20c81587bc106958424aaad8d3842355d2ff18cc22"} Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.275061 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.406215 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ctpv\" (UniqueName: \"kubernetes.io/projected/6a7fb352-ca11-42ed-9d3d-296e3747292f-kube-api-access-7ctpv\") pod \"6a7fb352-ca11-42ed-9d3d-296e3747292f\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.406299 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-bundle\") pod \"6a7fb352-ca11-42ed-9d3d-296e3747292f\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.406372 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-util\") pod \"6a7fb352-ca11-42ed-9d3d-296e3747292f\" (UID: \"6a7fb352-ca11-42ed-9d3d-296e3747292f\") " Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.408723 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-bundle" (OuterVolumeSpecName: "bundle") pod "6a7fb352-ca11-42ed-9d3d-296e3747292f" (UID: "6a7fb352-ca11-42ed-9d3d-296e3747292f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.414925 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7fb352-ca11-42ed-9d3d-296e3747292f-kube-api-access-7ctpv" (OuterVolumeSpecName: "kube-api-access-7ctpv") pod "6a7fb352-ca11-42ed-9d3d-296e3747292f" (UID: "6a7fb352-ca11-42ed-9d3d-296e3747292f"). InnerVolumeSpecName "kube-api-access-7ctpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.420923 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-util" (OuterVolumeSpecName: "util") pod "6a7fb352-ca11-42ed-9d3d-296e3747292f" (UID: "6a7fb352-ca11-42ed-9d3d-296e3747292f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.507919 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ctpv\" (UniqueName: \"kubernetes.io/projected/6a7fb352-ca11-42ed-9d3d-296e3747292f-kube-api-access-7ctpv\") on node \"crc\" DevicePath \"\"" Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.507952 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.507962 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a7fb352-ca11-42ed-9d3d-296e3747292f-util\") on node \"crc\" DevicePath \"\"" Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.957131 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" event={"ID":"6a7fb352-ca11-42ed-9d3d-296e3747292f","Type":"ContainerDied","Data":"c6cb68fbf0f7a5a0b8bad24dccb7549d1efce9c47ea29c187cf58dd9f91c0485"} Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.957210 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6cb68fbf0f7a5a0b8bad24dccb7549d1efce9c47ea29c187cf58dd9f91c0485" Nov 27 17:19:54 crc kubenswrapper[4792]: I1127 17:19:54.957302 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4" Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.518887 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vkjf7"] Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.519862 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovn-controller" containerID="cri-o://01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702" gracePeriod=30 Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.519905 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="sbdb" containerID="cri-o://221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" gracePeriod=30 Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.519887 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="northd" containerID="cri-o://9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df" gracePeriod=30 Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.519994 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovn-acl-logging" containerID="cri-o://a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb" gracePeriod=30 Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.519954 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kube-rbac-proxy-node" 
containerID="cri-o://3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb" gracePeriod=30 Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.519954 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="nbdb" containerID="cri-o://ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" gracePeriod=30 Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.519978 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee" gracePeriod=30 Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.558327 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" containerID="cri-o://557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa" gracePeriod=30 Nov 27 17:19:59 crc kubenswrapper[4792]: E1127 17:19:59.628706 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 27 17:19:59 crc kubenswrapper[4792]: E1127 17:19:59.629036 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 27 17:19:59 crc kubenswrapper[4792]: E1127 17:19:59.629745 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 27 17:19:59 crc kubenswrapper[4792]: E1127 17:19:59.630857 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 27 17:19:59 crc kubenswrapper[4792]: E1127 17:19:59.634763 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 27 17:19:59 crc kubenswrapper[4792]: E1127 17:19:59.634812 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="sbdb" Nov 27 17:19:59 crc kubenswrapper[4792]: E1127 17:19:59.634898 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 27 17:19:59 crc kubenswrapper[4792]: E1127 17:19:59.634918 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="nbdb" Nov 27 17:19:59 crc kubenswrapper[4792]: I1127 17:19:59.997987 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovnkube-controller/3.log" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.002067 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovn-acl-logging/0.log" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.003048 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovn-controller/0.log" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.003929 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa" exitCode=0 Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.003970 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" exitCode=0 Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.003988 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" exitCode=0 Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004005 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df" exitCode=0 Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004019 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb" exitCode=143 Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004033 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702" exitCode=143 Nov 27 17:20:00 crc 
kubenswrapper[4792]: I1127 17:20:00.004045 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa"} Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004106 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2"} Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004142 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931"} Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df"} Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004182 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb"} Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702"} Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.004222 4792 scope.go:117] "RemoveContainer" containerID="40a574fdad7302e90fd2edfc2c0aee23ac21283a21c7378695c4d6b8b292cd8b" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.008605 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/2.log" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.009484 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/1.log" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.009544 4792 generic.go:334] "Generic (PLEG): container finished" podID="71907161-f8b0-4b44-b61a-0e04200083f0" containerID="bf643a2c9717f4792a8748a68706497c448539b1e60802be4934fccee2c8e838" exitCode=2 Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.009581 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gbrqr" event={"ID":"71907161-f8b0-4b44-b61a-0e04200083f0","Type":"ContainerDied","Data":"bf643a2c9717f4792a8748a68706497c448539b1e60802be4934fccee2c8e838"} Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.010394 4792 scope.go:117] "RemoveContainer" containerID="bf643a2c9717f4792a8748a68706497c448539b1e60802be4934fccee2c8e838" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.010794 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gbrqr_openshift-multus(71907161-f8b0-4b44-b61a-0e04200083f0)\"" 
pod="openshift-multus/multus-gbrqr" podUID="71907161-f8b0-4b44-b61a-0e04200083f0" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.057364 4792 scope.go:117] "RemoveContainer" containerID="0b5d62f1bb8549b4cab2f42f0a65d518448affc1b1a72191be9f84521bd247b9" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.728277 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovn-acl-logging/0.log" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.729076 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovn-controller/0.log" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.729590 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786180 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ffvgv"] Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786382 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786397 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786406 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786414 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786423 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kube-rbac-proxy-node" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786430 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kube-rbac-proxy-node" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786439 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovn-acl-logging" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786445 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovn-acl-logging" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786453 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="nbdb" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786458 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="nbdb" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786466 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerName="util" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786471 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerName="util" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786478 4792 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="sbdb" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786484 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="sbdb" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786492 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerName="pull" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786498 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerName="pull" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786508 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerName="extract" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786513 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerName="extract" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786523 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kube-rbac-proxy-ovn-metrics" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786528 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kube-rbac-proxy-ovn-metrics" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786537 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786543 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786552 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kubecfg-setup" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786558 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kubecfg-setup" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786564 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786570 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786582 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="northd" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786587 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="northd" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786594 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovn-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786599 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovn-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786710 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovn-acl-logging" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786723 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786730 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786736 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7fb352-ca11-42ed-9d3d-296e3747292f" containerName="extract" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786743 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="sbdb" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786750 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="nbdb" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786757 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kube-rbac-proxy-ovn-metrics" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786767 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786775 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovn-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786784 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="northd" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786792 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="kube-rbac-proxy-node" Nov 27 17:20:00 crc kubenswrapper[4792]: E1127 17:20:00.786901 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.786908 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.787018 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.787215 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerName="ovnkube-controller" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.788738 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798757 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-netns\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798792 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-systemd-units\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798811 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-netd\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798846 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-log-socket\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798886 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovn-node-metrics-cert\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798915 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-openvswitch\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798879 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798940 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-var-lib-openvswitch\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798888 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798950 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798977 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-ovn\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-log-socket" (OuterVolumeSpecName: "log-socket") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798911 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.798998 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-etc-openvswitch\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799029 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-config\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799010 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799052 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799056 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-kubelet\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799070 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799097 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799123 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-node-log\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799155 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799154 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-node-log" (OuterVolumeSpecName: "node-log") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799206 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-script-lib\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799248 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c892m\" (UniqueName: \"kubernetes.io/projected/cd5ee573-9a50-4d09-b129-fb461db20cf6-kube-api-access-c892m\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799264 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799272 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-ovn-kubernetes\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799295 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799331 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-env-overrides\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799352 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-slash\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799367 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-systemd\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799403 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-bin\") pod \"cd5ee573-9a50-4d09-b129-fb461db20cf6\" (UID: \"cd5ee573-9a50-4d09-b129-fb461db20cf6\") " Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799549 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799599 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-slash" (OuterVolumeSpecName: "host-slash") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799668 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799697 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799945 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799965 4792 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.799988 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800014 4792 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800022 4792 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-node-log\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800031 4792 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800041 4792 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800050 4792 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800062 4792 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-slash\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800072 4792 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800083 4792 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800093 4792 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 
17:20:00.800104 4792 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800116 4792 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-log-socket\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800125 4792 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800133 4792 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.800141 4792 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.806174 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.807101 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5ee573-9a50-4d09-b129-fb461db20cf6-kube-api-access-c892m" (OuterVolumeSpecName: "kube-api-access-c892m") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "kube-api-access-c892m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.823163 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cd5ee573-9a50-4d09-b129-fb461db20cf6" (UID: "cd5ee573-9a50-4d09-b129-fb461db20cf6"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.901615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ck46\" (UniqueName: \"kubernetes.io/projected/12c72f62-8b9f-47b6-9667-8e504d239402-kube-api-access-4ck46\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.901951 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-cni-netd\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.901971 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.901988 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-etc-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902014 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-ovnkube-config\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-node-log\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902054 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-var-lib-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902120 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-run-ovn-kubernetes\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902163 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-run-netns\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-kubelet\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902334 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12c72f62-8b9f-47b6-9667-8e504d239402-ovn-node-metrics-cert\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902361 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-ovnkube-script-lib\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902391 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-slash\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902456 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-systemd\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902538 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-env-overrides\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902562 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902604 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-ovn\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902722 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-cni-bin\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902799 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-log-socket\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902825 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-systemd-units\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902966 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd5ee573-9a50-4d09-b129-fb461db20cf6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.902991 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c892m\" (UniqueName: \"kubernetes.io/projected/cd5ee573-9a50-4d09-b129-fb461db20cf6-kube-api-access-c892m\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.903004 4792 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd5ee573-9a50-4d09-b129-fb461db20cf6-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:00 crc kubenswrapper[4792]: I1127 17:20:00.903015 4792 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd5ee573-9a50-4d09-b129-fb461db20cf6-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004233 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-cni-netd\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004308 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-etc-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004333 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-ovnkube-config\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-node-log\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004372 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-var-lib-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004394 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-run-ovn-kubernetes\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004411 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-run-netns\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004435 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-kubelet\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004477 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-run-ovn-kubernetes\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004500 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-run-netns\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004529 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-kubelet\") pod 
\"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004552 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-node-log\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004556 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-var-lib-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004452 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12c72f62-8b9f-47b6-9667-8e504d239402-ovn-node-metrics-cert\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004446 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-etc-openvswitch\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004582 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-ovnkube-script-lib\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004603 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-slash\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004617 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-systemd\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004674 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-env-overrides\") pod \"ovnkube-node-ffvgv\" (UID: 
\"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-ovn\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004715 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-cni-bin\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004740 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-log-socket\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004754 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-systemd-units\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004773 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ck46\" (UniqueName: \"kubernetes.io/projected/12c72f62-8b9f-47b6-9667-8e504d239402-kube-api-access-4ck46\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005078 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-systemd\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005103 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005128 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-cni-bin\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.004672 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-slash\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 
crc kubenswrapper[4792]: I1127 17:20:01.005174 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-run-ovn\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005212 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-ovnkube-config\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005244 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-log-socket\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005250 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-systemd-units\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005297 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-ovnkube-script-lib\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005623 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/12c72f62-8b9f-47b6-9667-8e504d239402-env-overrides\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.005711 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12c72f62-8b9f-47b6-9667-8e504d239402-host-cni-netd\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.011165 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/12c72f62-8b9f-47b6-9667-8e504d239402-ovn-node-metrics-cert\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.019482 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovn-acl-logging/0.log" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.019994 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vkjf7_cd5ee573-9a50-4d09-b129-fb461db20cf6/ovn-controller/0.log" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.020584 4792 generic.go:334] "Generic (PLEG): container 
finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee" exitCode=0 Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.020612 4792 generic.go:334] "Generic (PLEG): container finished" podID="cd5ee573-9a50-4d09-b129-fb461db20cf6" containerID="3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb" exitCode=0 Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.020685 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee"} Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.020712 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb"} Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.020725 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" event={"ID":"cd5ee573-9a50-4d09-b129-fb461db20cf6","Type":"ContainerDied","Data":"a32bcd84a5d310b7576d8b3fced23f0da4d251ca928ad554a63957549ecae5f8"} Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.020743 4792 scope.go:117] "RemoveContainer" containerID="557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.020926 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vkjf7" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.023498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ck46\" (UniqueName: \"kubernetes.io/projected/12c72f62-8b9f-47b6-9667-8e504d239402-kube-api-access-4ck46\") pod \"ovnkube-node-ffvgv\" (UID: \"12c72f62-8b9f-47b6-9667-8e504d239402\") " pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.024898 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/2.log" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.058360 4792 scope.go:117] "RemoveContainer" containerID="221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.069658 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vkjf7"] Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.073986 4792 scope.go:117] "RemoveContainer" containerID="ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.080365 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vkjf7"] Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.090570 4792 scope.go:117] "RemoveContainer" containerID="9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.101261 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.105463 4792 scope.go:117] "RemoveContainer" containerID="4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.123605 4792 scope.go:117] "RemoveContainer" containerID="3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.142435 4792 scope.go:117] "RemoveContainer" containerID="a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.173807 4792 scope.go:117] "RemoveContainer" containerID="01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.209992 4792 scope.go:117] "RemoveContainer" containerID="ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.231885 4792 scope.go:117] "RemoveContainer" containerID="557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.232379 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa\": container with ID starting with 557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa not found: ID does not exist" containerID="557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.232422 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa"} err="failed to get container status \"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa\": rpc error: code = NotFound desc = could not find container \"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa\": container with ID starting with 557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.232449 4792 scope.go:117] "RemoveContainer" containerID="221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.232726 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\": container with ID starting with 221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2 not found: ID does not exist" containerID="221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.232749 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2"} err="failed to get container status \"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\": rpc error: code = NotFound desc = could not find container \"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\": container with ID starting with 221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2 not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.232763 4792 scope.go:117] "RemoveContainer" 
containerID="ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.232997 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\": container with ID starting with ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931 not found: ID does not exist" containerID="ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.233026 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931"} err="failed to get container status \"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\": rpc error: code = NotFound desc = could not find container \"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\": container with ID starting with ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931 not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.233040 4792 scope.go:117] "RemoveContainer" containerID="9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.233249 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\": container with ID starting with 9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df not found: ID does not exist" containerID="9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.233271 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df"} err="failed to get container status \"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\": rpc error: code = NotFound desc = could not find container \"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\": container with ID starting with 9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.233285 4792 scope.go:117] "RemoveContainer" containerID="4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.233495 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\": container with ID starting with 4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee not found: ID does not exist" containerID="4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.233516 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee"} err="failed to get container status \"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\": rpc error: code = NotFound desc = could not find container \"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\": container with ID starting with 
4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.233529 4792 scope.go:117] "RemoveContainer" containerID="3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.233755 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\": container with ID starting with 3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb not found: ID does not exist" containerID="3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.233774 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb"} err="failed to get container status \"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\": rpc error: code = NotFound desc = could not find container \"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\": container with ID starting with 3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.233787 4792 scope.go:117] "RemoveContainer" containerID="a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.233986 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\": container with ID starting with a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb not found: ID does not exist" containerID="a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234007 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb"} err="failed to get container status \"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\": rpc error: code = NotFound desc = could not find container \"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\": container with ID starting with a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234022 4792 scope.go:117] "RemoveContainer" containerID="01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.234208 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\": container with ID starting with 01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702 not found: ID does not exist" containerID="01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234226 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702"} err="failed to get container status \"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\": rpc 
error: code = NotFound desc = could not find container \"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\": container with ID starting with 01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702 not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234239 4792 scope.go:117] "RemoveContainer" containerID="ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934" Nov 27 17:20:01 crc kubenswrapper[4792]: E1127 17:20:01.234453 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\": container with ID starting with ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934 not found: ID does not exist" containerID="ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234472 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934"} err="failed to get container status \"ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\": rpc error: code = NotFound desc = could not find container \"ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\": container with ID starting with ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934 not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234488 4792 scope.go:117] "RemoveContainer" containerID="557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234733 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa"} err="failed to get container status \"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa\": rpc error: code = NotFound desc = could not find container \"557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa\": container with ID starting with 557a33f088a10c9c4966cd4b3a4629878e00eb4b2abaee5ee25999c83f4c56fa not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234755 4792 scope.go:117] "RemoveContainer" containerID="221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234967 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2"} err="failed to get container status \"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\": rpc error: code = NotFound desc = could not find container \"221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2\": container with ID starting with 221b401c68870aed30a469d738ddc664df732d8ddb9771783b08483c1b551ce2 not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.234984 4792 scope.go:117] "RemoveContainer" containerID="ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.235173 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931"} err="failed to get container status \"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\": rpc 
error: code = NotFound desc = could not find container \"ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931\": container with ID starting with ebfb9d9f63ec4dcb2f9196679855b0802e7e645ce64fa976bcd1672b25a81931 not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.235195 4792 scope.go:117] "RemoveContainer" containerID="9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.235381 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df"} err="failed to get container status \"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\": rpc error: code = NotFound desc = could not find container \"9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df\": container with ID starting with 9deb664c4253fd22a3584d1e589010bfeb5a179028d299a08bced664e7be62df not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.235398 4792 scope.go:117] "RemoveContainer" containerID="4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.235601 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee"} err="failed to get container status \"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\": rpc error: code = NotFound desc = could not find container \"4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee\": container with ID starting with 4f7a94aefc7bf1e56f88ff63c13ff5e9c7e14dbdda19fe2d23d37910fa042dee not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.235618 4792 scope.go:117] "RemoveContainer" containerID="3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.236032 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb"} err="failed to get container status \"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\": rpc error: code = NotFound desc = could not find container \"3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb\": container with ID starting with 3898ad294ade4580903708898ac428819105a57dc58eac56d1bc5452b023c6bb not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.236050 4792 scope.go:117] "RemoveContainer" containerID="a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.236272 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb"} err="failed to get container status \"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\": rpc error: code = NotFound desc = could not find container \"a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb\": container with ID starting with a4bdcacf54f8fed792e0b903f217ffda2a491c8442c6cb44756f02c21870e2bb not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.236288 4792 scope.go:117] "RemoveContainer" containerID="01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702" Nov 27 17:20:01 crc 
kubenswrapper[4792]: I1127 17:20:01.236481 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702"} err="failed to get container status \"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\": rpc error: code = NotFound desc = could not find container \"01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702\": container with ID starting with 01244e2a3b9fda25a474a9aafd4a2ef0b27a25254f9feecd2d9fc09b0f821702 not found: ID does not exist" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.236499 4792 scope.go:117] "RemoveContainer" containerID="ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934" Nov 27 17:20:01 crc kubenswrapper[4792]: I1127 17:20:01.236919 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934"} err="failed to get container status \"ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\": rpc error: code = NotFound desc = could not find container \"ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934\": container with ID starting with ec1265a3b96e7638988802a787216c1207ef184fc86ec9da8c170c6316cbc934 not found: ID does not exist" Nov 27 17:20:02 crc kubenswrapper[4792]: I1127 17:20:02.032476 4792 generic.go:334] "Generic (PLEG): container finished" podID="12c72f62-8b9f-47b6-9667-8e504d239402" containerID="2e30b4cb8fb443fcaf7dd504b846a97a44d6b84c5c0fca819342a0680fac64ed" exitCode=0 Nov 27 17:20:02 crc kubenswrapper[4792]: I1127 17:20:02.032558 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerDied","Data":"2e30b4cb8fb443fcaf7dd504b846a97a44d6b84c5c0fca819342a0680fac64ed"} Nov 27 17:20:02 crc kubenswrapper[4792]: I1127 17:20:02.032859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"238c4dbce5709625a76ba8229b9cf60566252dc5ec54b4f5b13590346c88795d"} Nov 27 17:20:02 crc kubenswrapper[4792]: I1127 17:20:02.696202 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd5ee573-9a50-4d09-b129-fb461db20cf6" path="/var/lib/kubelet/pods/cd5ee573-9a50-4d09-b129-fb461db20cf6/volumes" Nov 27 17:20:03 crc kubenswrapper[4792]: I1127 17:20:03.052703 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"fda24c8f1baea2cb8d4d2286d1f9c5fec5957ddb021e78d8378f134e88bca2a8"} Nov 27 17:20:03 crc kubenswrapper[4792]: I1127 17:20:03.052742 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"0fea26d158ce061dbaea0c0703fb9b06fcee63f5a77efb8f7138d7c3d4faed80"} Nov 27 17:20:03 crc kubenswrapper[4792]: I1127 17:20:03.052754 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"3fa6111ca2f12cd696eec6130fcbced1518c96e2de7971ea0dba257dc5c97ff9"} Nov 27 17:20:03 crc kubenswrapper[4792]: I1127 17:20:03.052765 4792 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"6893b768ec0bf834b3599f82358f25a6db21b926a7339b7b6254704d792c6961"} Nov 27 17:20:03 crc kubenswrapper[4792]: I1127 17:20:03.052774 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"4349130afda9d8bc73e22f0ebf5e2fd6246b2758873bf8218e0a812d51d87e0d"} Nov 27 17:20:03 crc kubenswrapper[4792]: I1127 17:20:03.052783 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"1ffe47a4aea662e7bbb62a801ce2c25cde8d92629f4ef448812d21eb724a7c31"} Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.075927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"7089b3aee93fd98a478eba7c1bbccdbe0de2c658a760d99cd8c3f7eca4b2408b"} Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.094890 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt"] Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.095618 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.098038 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.098156 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.098222 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-cb6p2" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.174716 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqq4k\" (UniqueName: \"kubernetes.io/projected/f6463a7b-af91-4c4a-b67c-10f17f30becd-kube-api-access-tqq4k\") pod \"obo-prometheus-operator-668cf9dfbb-xwzqt\" (UID: \"f6463a7b-af91-4c4a-b67c-10f17f30becd\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.184890 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw"] Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.185668 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.188193 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-pjthz" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.188427 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.204153 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w"] Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.205079 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.276359 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqq4k\" (UniqueName: \"kubernetes.io/projected/f6463a7b-af91-4c4a-b67c-10f17f30becd-kube-api-access-tqq4k\") pod \"obo-prometheus-operator-668cf9dfbb-xwzqt\" (UID: \"f6463a7b-af91-4c4a-b67c-10f17f30becd\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.276425 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6be8d975-0d93-42ba-9184-21f36ab98ac9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w\" (UID: \"6be8d975-0d93-42ba-9184-21f36ab98ac9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.276457 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/deae3170-952b-45e4-9527-ce9b37f90359-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw\" (UID: \"deae3170-952b-45e4-9527-ce9b37f90359\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.276479 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6be8d975-0d93-42ba-9184-21f36ab98ac9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w\" (UID: \"6be8d975-0d93-42ba-9184-21f36ab98ac9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.276501 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/deae3170-952b-45e4-9527-ce9b37f90359-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw\" (UID: \"deae3170-952b-45e4-9527-ce9b37f90359\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.296337 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqq4k\" (UniqueName: \"kubernetes.io/projected/f6463a7b-af91-4c4a-b67c-10f17f30becd-kube-api-access-tqq4k\") 
pod \"obo-prometheus-operator-668cf9dfbb-xwzqt\" (UID: \"f6463a7b-af91-4c4a-b67c-10f17f30becd\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.329776 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-qdcnd"] Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.330500 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.332380 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.337925 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-9kcdf" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.378153 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6be8d975-0d93-42ba-9184-21f36ab98ac9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w\" (UID: \"6be8d975-0d93-42ba-9184-21f36ab98ac9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.378218 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/deae3170-952b-45e4-9527-ce9b37f90359-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw\" (UID: \"deae3170-952b-45e4-9527-ce9b37f90359\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.378260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2p98\" (UniqueName: \"kubernetes.io/projected/b4f46dd1-954f-497f-b491-a3df62aafda6-kube-api-access-n2p98\") pod \"observability-operator-d8bb48f5d-qdcnd\" (UID: \"b4f46dd1-954f-497f-b491-a3df62aafda6\") " pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.378292 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6be8d975-0d93-42ba-9184-21f36ab98ac9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w\" (UID: \"6be8d975-0d93-42ba-9184-21f36ab98ac9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.378327 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/deae3170-952b-45e4-9527-ce9b37f90359-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw\" (UID: \"deae3170-952b-45e4-9527-ce9b37f90359\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.378354 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4f46dd1-954f-497f-b491-a3df62aafda6-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-qdcnd\" (UID: 
\"b4f46dd1-954f-497f-b491-a3df62aafda6\") " pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.382960 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6be8d975-0d93-42ba-9184-21f36ab98ac9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w\" (UID: \"6be8d975-0d93-42ba-9184-21f36ab98ac9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.382975 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/deae3170-952b-45e4-9527-ce9b37f90359-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw\" (UID: \"deae3170-952b-45e4-9527-ce9b37f90359\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.382960 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6be8d975-0d93-42ba-9184-21f36ab98ac9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w\" (UID: \"6be8d975-0d93-42ba-9184-21f36ab98ac9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.382975 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/deae3170-952b-45e4-9527-ce9b37f90359-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw\" (UID: \"deae3170-952b-45e4-9527-ce9b37f90359\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.412372 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.438272 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(48f6f91598b8b1665a3d599a79999049a0fa2b642f2761b7f348ad38ee210222): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.438345 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(48f6f91598b8b1665a3d599a79999049a0fa2b642f2761b7f348ad38ee210222): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.438365 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(48f6f91598b8b1665a3d599a79999049a0fa2b642f2761b7f348ad38ee210222): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.438410 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators(f6463a7b-af91-4c4a-b67c-10f17f30becd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators(f6463a7b-af91-4c4a-b67c-10f17f30becd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(48f6f91598b8b1665a3d599a79999049a0fa2b642f2761b7f348ad38ee210222): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" podUID="f6463a7b-af91-4c4a-b67c-10f17f30becd" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.480059 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2p98\" (UniqueName: \"kubernetes.io/projected/b4f46dd1-954f-497f-b491-a3df62aafda6-kube-api-access-n2p98\") pod \"observability-operator-d8bb48f5d-qdcnd\" (UID: \"b4f46dd1-954f-497f-b491-a3df62aafda6\") " pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.480123 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4f46dd1-954f-497f-b491-a3df62aafda6-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-qdcnd\" (UID: \"b4f46dd1-954f-497f-b491-a3df62aafda6\") " pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.483125 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b4f46dd1-954f-497f-b491-a3df62aafda6-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-qdcnd\" (UID: \"b4f46dd1-954f-497f-b491-a3df62aafda6\") " pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.500947 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2p98\" (UniqueName: \"kubernetes.io/projected/b4f46dd1-954f-497f-b491-a3df62aafda6-kube-api-access-n2p98\") pod \"observability-operator-d8bb48f5d-qdcnd\" (UID: \"b4f46dd1-954f-497f-b491-a3df62aafda6\") " pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.502853 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.514027 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-pjkg6"] Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.519791 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.520314 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.524618 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-n8bcl" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.546168 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(018cc4a3f79f035bb039d56396b072d2514ad4c480b06b92fb181e93edfb2277): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.546226 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(018cc4a3f79f035bb039d56396b072d2514ad4c480b06b92fb181e93edfb2277): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.546248 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(018cc4a3f79f035bb039d56396b072d2514ad4c480b06b92fb181e93edfb2277): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.546294 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators(deae3170-952b-45e4-9527-ce9b37f90359)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators(deae3170-952b-45e4-9527-ce9b37f90359)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(018cc4a3f79f035bb039d56396b072d2514ad4c480b06b92fb181e93edfb2277): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" podUID="deae3170-952b-45e4-9527-ce9b37f90359" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.557250 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(fae3d6e554275c6b093e619c49520b578d5d0876592eacd0b94b490b5191e46b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.557306 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(fae3d6e554275c6b093e619c49520b578d5d0876592eacd0b94b490b5191e46b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.557327 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(fae3d6e554275c6b093e619c49520b578d5d0876592eacd0b94b490b5191e46b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.557374 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators(6be8d975-0d93-42ba-9184-21f36ab98ac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators(6be8d975-0d93-42ba-9184-21f36ab98ac9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(fae3d6e554275c6b093e619c49520b578d5d0876592eacd0b94b490b5191e46b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" podUID="6be8d975-0d93-42ba-9184-21f36ab98ac9" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.582502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6wsr\" (UniqueName: \"kubernetes.io/projected/44e4a3bf-3593-4c1e-b9cc-4c294ed26692-kube-api-access-p6wsr\") pod \"perses-operator-5446b9c989-pjkg6\" (UID: \"44e4a3bf-3593-4c1e-b9cc-4c294ed26692\") " pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.582758 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/44e4a3bf-3593-4c1e-b9cc-4c294ed26692-openshift-service-ca\") pod \"perses-operator-5446b9c989-pjkg6\" (UID: \"44e4a3bf-3593-4c1e-b9cc-4c294ed26692\") " pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.651889 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.683292 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(3765d11d5c023a45bb36fe80f966be6aaa5b9ced05c0ddf69b30bec7104ba7ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.683369 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(3765d11d5c023a45bb36fe80f966be6aaa5b9ced05c0ddf69b30bec7104ba7ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.683396 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(3765d11d5c023a45bb36fe80f966be6aaa5b9ced05c0ddf69b30bec7104ba7ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.683441 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-qdcnd_openshift-operators(b4f46dd1-954f-497f-b491-a3df62aafda6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-qdcnd_openshift-operators(b4f46dd1-954f-497f-b491-a3df62aafda6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(3765d11d5c023a45bb36fe80f966be6aaa5b9ced05c0ddf69b30bec7104ba7ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" podUID="b4f46dd1-954f-497f-b491-a3df62aafda6" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.683970 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/44e4a3bf-3593-4c1e-b9cc-4c294ed26692-openshift-service-ca\") pod \"perses-operator-5446b9c989-pjkg6\" (UID: \"44e4a3bf-3593-4c1e-b9cc-4c294ed26692\") " pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.684045 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6wsr\" (UniqueName: \"kubernetes.io/projected/44e4a3bf-3593-4c1e-b9cc-4c294ed26692-kube-api-access-p6wsr\") pod \"perses-operator-5446b9c989-pjkg6\" (UID: \"44e4a3bf-3593-4c1e-b9cc-4c294ed26692\") " pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.685046 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/44e4a3bf-3593-4c1e-b9cc-4c294ed26692-openshift-service-ca\") pod \"perses-operator-5446b9c989-pjkg6\" (UID: \"44e4a3bf-3593-4c1e-b9cc-4c294ed26692\") " pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.707109 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6wsr\" (UniqueName: \"kubernetes.io/projected/44e4a3bf-3593-4c1e-b9cc-4c294ed26692-kube-api-access-p6wsr\") pod \"perses-operator-5446b9c989-pjkg6\" (UID: \"44e4a3bf-3593-4c1e-b9cc-4c294ed26692\") " pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: I1127 17:20:06.832792 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.865035 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(230077221b77927aeee8794f80d018dadf17539b0fc77a58213f7672d58dac43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.865097 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(230077221b77927aeee8794f80d018dadf17539b0fc77a58213f7672d58dac43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.865120 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(230077221b77927aeee8794f80d018dadf17539b0fc77a58213f7672d58dac43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:06 crc kubenswrapper[4792]: E1127 17:20:06.865161 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-pjkg6_openshift-operators(44e4a3bf-3593-4c1e-b9cc-4c294ed26692)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-pjkg6_openshift-operators(44e4a3bf-3593-4c1e-b9cc-4c294ed26692)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(230077221b77927aeee8794f80d018dadf17539b0fc77a58213f7672d58dac43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" podUID="44e4a3bf-3593-4c1e-b9cc-4c294ed26692" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.089332 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" event={"ID":"12c72f62-8b9f-47b6-9667-8e504d239402","Type":"ContainerStarted","Data":"d40ea54e5d67c09812eab5d6b50067cb76d3f49f52a3a63e2037b21c9456b0b8"} Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.089700 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.089721 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.126187 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" podStartSLOduration=8.126165916 podStartE2EDuration="8.126165916s" podCreationTimestamp="2025-11-27 17:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:20:08.124941055 +0000 UTC m=+630.467767373" watchObservedRunningTime="2025-11-27 17:20:08.126165916 +0000 UTC m=+630.468992234" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.132879 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.382560 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w"] Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.382704 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.383175 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.398094 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-pjkg6"] Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.398231 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.398727 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.417571 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(2b9ec517fbc53a21b7703857a8f1edf5bd9dd51e8aad17faf7060b432fe74239): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.417631 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(2b9ec517fbc53a21b7703857a8f1edf5bd9dd51e8aad17faf7060b432fe74239): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.417664 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(2b9ec517fbc53a21b7703857a8f1edf5bd9dd51e8aad17faf7060b432fe74239): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.417752 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators(6be8d975-0d93-42ba-9184-21f36ab98ac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators(6be8d975-0d93-42ba-9184-21f36ab98ac9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(2b9ec517fbc53a21b7703857a8f1edf5bd9dd51e8aad17faf7060b432fe74239): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" podUID="6be8d975-0d93-42ba-9184-21f36ab98ac9" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.436267 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(d192d041823c978998770f2d9cf7b160a01a6abc2e7926b20201e399d1c9ccca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.436347 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(d192d041823c978998770f2d9cf7b160a01a6abc2e7926b20201e399d1c9ccca): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.436368 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(d192d041823c978998770f2d9cf7b160a01a6abc2e7926b20201e399d1c9ccca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.436412 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-pjkg6_openshift-operators(44e4a3bf-3593-4c1e-b9cc-4c294ed26692)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-pjkg6_openshift-operators(44e4a3bf-3593-4c1e-b9cc-4c294ed26692)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(d192d041823c978998770f2d9cf7b160a01a6abc2e7926b20201e399d1c9ccca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" podUID="44e4a3bf-3593-4c1e-b9cc-4c294ed26692" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.443573 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt"] Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.443724 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.444208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.468897 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw"] Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.469024 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.469427 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.475614 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(da68e44bf903180c2f21987f1d63dc2203d1696bc93667c04b62e0fa0bb8bb51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.475706 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(da68e44bf903180c2f21987f1d63dc2203d1696bc93667c04b62e0fa0bb8bb51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.475735 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(da68e44bf903180c2f21987f1d63dc2203d1696bc93667c04b62e0fa0bb8bb51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.475790 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators(f6463a7b-af91-4c4a-b67c-10f17f30becd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators(f6463a7b-af91-4c4a-b67c-10f17f30becd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(da68e44bf903180c2f21987f1d63dc2203d1696bc93667c04b62e0fa0bb8bb51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" podUID="f6463a7b-af91-4c4a-b67c-10f17f30becd" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.495209 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-qdcnd"] Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.495339 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:08 crc kubenswrapper[4792]: I1127 17:20:08.495847 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.514544 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(e2de77a90f8888dda2df74c77137e9fb6b6431db7f5bc1d5b2f4f9e56c1d9dc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.514616 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(e2de77a90f8888dda2df74c77137e9fb6b6431db7f5bc1d5b2f4f9e56c1d9dc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.514672 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(e2de77a90f8888dda2df74c77137e9fb6b6431db7f5bc1d5b2f4f9e56c1d9dc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.514730 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators(deae3170-952b-45e4-9527-ce9b37f90359)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators(deae3170-952b-45e4-9527-ce9b37f90359)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(e2de77a90f8888dda2df74c77137e9fb6b6431db7f5bc1d5b2f4f9e56c1d9dc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" podUID="deae3170-952b-45e4-9527-ce9b37f90359" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.540345 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(70f39f0e16da8b01a5edd16ddc47795eb2f05a675697b7655714ae78c1d957db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.540403 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(70f39f0e16da8b01a5edd16ddc47795eb2f05a675697b7655714ae78c1d957db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.540423 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(70f39f0e16da8b01a5edd16ddc47795eb2f05a675697b7655714ae78c1d957db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:08 crc kubenswrapper[4792]: E1127 17:20:08.540464 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-qdcnd_openshift-operators(b4f46dd1-954f-497f-b491-a3df62aafda6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-qdcnd_openshift-operators(b4f46dd1-954f-497f-b491-a3df62aafda6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(70f39f0e16da8b01a5edd16ddc47795eb2f05a675697b7655714ae78c1d957db): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" podUID="b4f46dd1-954f-497f-b491-a3df62aafda6" Nov 27 17:20:09 crc kubenswrapper[4792]: I1127 17:20:09.097466 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:09 crc kubenswrapper[4792]: I1127 17:20:09.138526 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:14 crc kubenswrapper[4792]: I1127 17:20:14.687160 4792 scope.go:117] "RemoveContainer" containerID="bf643a2c9717f4792a8748a68706497c448539b1e60802be4934fccee2c8e838" Nov 27 17:20:14 crc kubenswrapper[4792]: E1127 17:20:14.687813 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gbrqr_openshift-multus(71907161-f8b0-4b44-b61a-0e04200083f0)\"" pod="openshift-multus/multus-gbrqr" podUID="71907161-f8b0-4b44-b61a-0e04200083f0" Nov 27 17:20:18 crc kubenswrapper[4792]: I1127 17:20:18.685763 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:18 crc kubenswrapper[4792]: I1127 17:20:18.689612 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:18 crc kubenswrapper[4792]: E1127 17:20:18.722156 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(a8cf586f723bfd0e0d5ef99e312d1ba93d956dbb50400e0d75dbbb9b866b9c4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:18 crc kubenswrapper[4792]: E1127 17:20:18.722580 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(a8cf586f723bfd0e0d5ef99e312d1ba93d956dbb50400e0d75dbbb9b866b9c4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:18 crc kubenswrapper[4792]: E1127 17:20:18.722672 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(a8cf586f723bfd0e0d5ef99e312d1ba93d956dbb50400e0d75dbbb9b866b9c4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:18 crc kubenswrapper[4792]: E1127 17:20:18.722787 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-pjkg6_openshift-operators(44e4a3bf-3593-4c1e-b9cc-4c294ed26692)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-pjkg6_openshift-operators(44e4a3bf-3593-4c1e-b9cc-4c294ed26692)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pjkg6_openshift-operators_44e4a3bf-3593-4c1e-b9cc-4c294ed26692_0(a8cf586f723bfd0e0d5ef99e312d1ba93d956dbb50400e0d75dbbb9b866b9c4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" podUID="44e4a3bf-3593-4c1e-b9cc-4c294ed26692" Nov 27 17:20:22 crc kubenswrapper[4792]: I1127 17:20:22.686255 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:22 crc kubenswrapper[4792]: I1127 17:20:22.686686 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:22 crc kubenswrapper[4792]: I1127 17:20:22.687042 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:22 crc kubenswrapper[4792]: I1127 17:20:22.687053 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:22 crc kubenswrapper[4792]: I1127 17:20:22.687314 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:22 crc kubenswrapper[4792]: I1127 17:20:22.687502 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.744868 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(0f333daaed1c81ba76cc2f9c834e06131634db22847e09c05ae74d5047404c43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.744936 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(0f333daaed1c81ba76cc2f9c834e06131634db22847e09c05ae74d5047404c43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.744963 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(0f333daaed1c81ba76cc2f9c834e06131634db22847e09c05ae74d5047404c43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.745014 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators(6be8d975-0d93-42ba-9184-21f36ab98ac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators(6be8d975-0d93-42ba-9184-21f36ab98ac9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_openshift-operators_6be8d975-0d93-42ba-9184-21f36ab98ac9_0(0f333daaed1c81ba76cc2f9c834e06131634db22847e09c05ae74d5047404c43): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" podUID="6be8d975-0d93-42ba-9184-21f36ab98ac9" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.751761 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(fcfa52048e480fdc96ca7d6423abcb06f7df823a8f2339afec96e9b8c247b7bb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.751815 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(fcfa52048e480fdc96ca7d6423abcb06f7df823a8f2339afec96e9b8c247b7bb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.751838 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(fcfa52048e480fdc96ca7d6423abcb06f7df823a8f2339afec96e9b8c247b7bb): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.751878 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators(f6463a7b-af91-4c4a-b67c-10f17f30becd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators(f6463a7b-af91-4c4a-b67c-10f17f30becd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-xwzqt_openshift-operators_f6463a7b-af91-4c4a-b67c-10f17f30becd_0(fcfa52048e480fdc96ca7d6423abcb06f7df823a8f2339afec96e9b8c247b7bb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" podUID="f6463a7b-af91-4c4a-b67c-10f17f30becd" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.754732 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(995abbf5ebf47db87877e3b8d06af7a8b2336bbf668d2a05cafd8a1a11d3d0c9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.754765 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(995abbf5ebf47db87877e3b8d06af7a8b2336bbf668d2a05cafd8a1a11d3d0c9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.754780 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(995abbf5ebf47db87877e3b8d06af7a8b2336bbf668d2a05cafd8a1a11d3d0c9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:22 crc kubenswrapper[4792]: E1127 17:20:22.754813 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators(deae3170-952b-45e4-9527-ce9b37f90359)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators(deae3170-952b-45e4-9527-ce9b37f90359)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_openshift-operators_deae3170-952b-45e4-9527-ce9b37f90359_0(995abbf5ebf47db87877e3b8d06af7a8b2336bbf668d2a05cafd8a1a11d3d0c9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" podUID="deae3170-952b-45e4-9527-ce9b37f90359" Nov 27 17:20:23 crc kubenswrapper[4792]: I1127 17:20:23.685635 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:23 crc kubenswrapper[4792]: I1127 17:20:23.686289 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:23 crc kubenswrapper[4792]: E1127 17:20:23.714732 4792 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(76e8020cfaa88f6c53c54554a8b8efc02413af2a06c91becf1746614560481fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 27 17:20:23 crc kubenswrapper[4792]: E1127 17:20:23.714833 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(76e8020cfaa88f6c53c54554a8b8efc02413af2a06c91becf1746614560481fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:23 crc kubenswrapper[4792]: E1127 17:20:23.714861 4792 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(76e8020cfaa88f6c53c54554a8b8efc02413af2a06c91becf1746614560481fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:23 crc kubenswrapper[4792]: E1127 17:20:23.714915 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-qdcnd_openshift-operators(b4f46dd1-954f-497f-b491-a3df62aafda6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-qdcnd_openshift-operators(b4f46dd1-954f-497f-b491-a3df62aafda6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-qdcnd_openshift-operators_b4f46dd1-954f-497f-b491-a3df62aafda6_0(76e8020cfaa88f6c53c54554a8b8efc02413af2a06c91becf1746614560481fb): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" podUID="b4f46dd1-954f-497f-b491-a3df62aafda6" Nov 27 17:20:28 crc kubenswrapper[4792]: I1127 17:20:28.690763 4792 scope.go:117] "RemoveContainer" containerID="bf643a2c9717f4792a8748a68706497c448539b1e60802be4934fccee2c8e838" Nov 27 17:20:29 crc kubenswrapper[4792]: I1127 17:20:29.221468 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gbrqr_71907161-f8b0-4b44-b61a-0e04200083f0/kube-multus/2.log" Nov 27 17:20:29 crc kubenswrapper[4792]: I1127 17:20:29.221820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gbrqr" event={"ID":"71907161-f8b0-4b44-b61a-0e04200083f0","Type":"ContainerStarted","Data":"b03cf422bf2458b5c15039769819f546b92a6189c6feaf8b65f532d806850646"} Nov 27 17:20:31 crc kubenswrapper[4792]: I1127 17:20:31.129171 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ffvgv" Nov 27 17:20:33 crc kubenswrapper[4792]: I1127 17:20:33.685973 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:33 crc kubenswrapper[4792]: I1127 17:20:33.686164 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:33 crc kubenswrapper[4792]: I1127 17:20:33.686555 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" Nov 27 17:20:33 crc kubenswrapper[4792]: I1127 17:20:33.686730 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:33 crc kubenswrapper[4792]: I1127 17:20:33.915898 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt"] Nov 27 17:20:33 crc kubenswrapper[4792]: W1127 17:20:33.986702 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44e4a3bf_3593_4c1e_b9cc_4c294ed26692.slice/crio-72bc509d11f5baf033249d18bb11217e0414c8ea79dd8d67132f38fe82595beb WatchSource:0}: Error finding container 72bc509d11f5baf033249d18bb11217e0414c8ea79dd8d67132f38fe82595beb: Status 404 returned error can't find the container with id 72bc509d11f5baf033249d18bb11217e0414c8ea79dd8d67132f38fe82595beb Nov 27 17:20:33 crc kubenswrapper[4792]: I1127 17:20:33.987498 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-pjkg6"] Nov 27 17:20:34 crc kubenswrapper[4792]: I1127 17:20:34.253969 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" event={"ID":"f6463a7b-af91-4c4a-b67c-10f17f30becd","Type":"ContainerStarted","Data":"460f24a7e3367b9e811b00b87690ee13ff1afb98b372da115625bd4761bc18bc"} Nov 27 17:20:34 crc kubenswrapper[4792]: I1127 17:20:34.254718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" event={"ID":"44e4a3bf-3593-4c1e-b9cc-4c294ed26692","Type":"ContainerStarted","Data":"72bc509d11f5baf033249d18bb11217e0414c8ea79dd8d67132f38fe82595beb"} Nov 27 17:20:35 crc kubenswrapper[4792]: I1127 17:20:35.686320 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:35 crc kubenswrapper[4792]: I1127 17:20:35.686347 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:35 crc kubenswrapper[4792]: I1127 17:20:35.687045 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" Nov 27 17:20:35 crc kubenswrapper[4792]: I1127 17:20:35.687245 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" Nov 27 17:20:36 crc kubenswrapper[4792]: I1127 17:20:36.158199 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w"] Nov 27 17:20:36 crc kubenswrapper[4792]: I1127 17:20:36.199079 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw"] Nov 27 17:20:36 crc kubenswrapper[4792]: W1127 17:20:36.207135 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddeae3170_952b_45e4_9527_ce9b37f90359.slice/crio-25c7f65f92c230f57bf80111a363185031e6b79b03dedf0508555a2aef7910bf WatchSource:0}: Error finding container 25c7f65f92c230f57bf80111a363185031e6b79b03dedf0508555a2aef7910bf: Status 404 returned error can't find the container with id 25c7f65f92c230f57bf80111a363185031e6b79b03dedf0508555a2aef7910bf Nov 27 17:20:36 crc kubenswrapper[4792]: I1127 17:20:36.265131 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" event={"ID":"deae3170-952b-45e4-9527-ce9b37f90359","Type":"ContainerStarted","Data":"25c7f65f92c230f57bf80111a363185031e6b79b03dedf0508555a2aef7910bf"} Nov 27 17:20:36 crc kubenswrapper[4792]: I1127 17:20:36.266208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" event={"ID":"6be8d975-0d93-42ba-9184-21f36ab98ac9","Type":"ContainerStarted","Data":"3633e4fc4ac30c24ffee5607d6aba6bac5506fc222305ae1e386121c6d5a8181"} Nov 27 17:20:36 crc kubenswrapper[4792]: I1127 17:20:36.686177 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:36 crc kubenswrapper[4792]: I1127 17:20:36.686842 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:37 crc kubenswrapper[4792]: I1127 17:20:37.154375 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-qdcnd"] Nov 27 17:20:37 crc kubenswrapper[4792]: W1127 17:20:37.169795 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4f46dd1_954f_497f_b491_a3df62aafda6.slice/crio-b9bc8e0d00a9dbd6168d12a35219e8a784f7d9bafc5bcab66f705b218396cc55 WatchSource:0}: Error finding container b9bc8e0d00a9dbd6168d12a35219e8a784f7d9bafc5bcab66f705b218396cc55: Status 404 returned error can't find the container with id b9bc8e0d00a9dbd6168d12a35219e8a784f7d9bafc5bcab66f705b218396cc55 Nov 27 17:20:37 crc kubenswrapper[4792]: I1127 17:20:37.285576 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" event={"ID":"b4f46dd1-954f-497f-b491-a3df62aafda6","Type":"ContainerStarted","Data":"b9bc8e0d00a9dbd6168d12a35219e8a784f7d9bafc5bcab66f705b218396cc55"} Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.365014 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" event={"ID":"deae3170-952b-45e4-9527-ce9b37f90359","Type":"ContainerStarted","Data":"56fedd457c2ca159b7ab1db0ab50587abc55039fc55066041b7545f6aad7211a"} Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.367613 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" event={"ID":"6be8d975-0d93-42ba-9184-21f36ab98ac9","Type":"ContainerStarted","Data":"3a5e73cc820b5409f7e81a5053173ae4ac2fe070d31868c4b6ca53a633f22ba3"} Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.369419 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" event={"ID":"f6463a7b-af91-4c4a-b67c-10f17f30becd","Type":"ContainerStarted","Data":"fde1acac74b5eb0a7fda7e29bf3f0c7e7c6808b6c6df082195f4dbef33b3f670"} Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.370826 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" event={"ID":"b4f46dd1-954f-497f-b491-a3df62aafda6","Type":"ContainerStarted","Data":"d74ed90897b6903ae56e0642a97287a8bb2fa50a507a44bba374f3314cbfb94a"} Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.371022 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.372444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" event={"ID":"44e4a3bf-3593-4c1e-b9cc-4c294ed26692","Type":"ContainerStarted","Data":"7e2202720c74f8da8d884f04f69ab12fa6fd1fe294621ab4f127c7a0c7d7e791"} Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.372674 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.373099 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.395590 4792 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw" podStartSLOduration=31.395369242 podStartE2EDuration="41.395562077s" podCreationTimestamp="2025-11-27 17:20:06 +0000 UTC" firstStartedPulling="2025-11-27 17:20:36.210286495 +0000 UTC m=+658.553112813" lastFinishedPulling="2025-11-27 17:20:46.21047928 +0000 UTC m=+668.553305648" observedRunningTime="2025-11-27 17:20:47.38658437 +0000 UTC m=+669.729410698" watchObservedRunningTime="2025-11-27 17:20:47.395562077 +0000 UTC m=+669.738388405" Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.423863 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-xwzqt" podStartSLOduration=29.141914205 podStartE2EDuration="41.423839274s" podCreationTimestamp="2025-11-27 17:20:06 +0000 UTC" firstStartedPulling="2025-11-27 17:20:33.92695097 +0000 UTC m=+656.269777288" lastFinishedPulling="2025-11-27 17:20:46.208876019 +0000 UTC m=+668.551702357" observedRunningTime="2025-11-27 17:20:47.421071704 +0000 UTC m=+669.763898042" watchObservedRunningTime="2025-11-27 17:20:47.423839274 +0000 UTC m=+669.766665592" Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.450489 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w" podStartSLOduration=31.413991305 podStartE2EDuration="41.450458159s" podCreationTimestamp="2025-11-27 17:20:06 +0000 UTC" firstStartedPulling="2025-11-27 17:20:36.173060892 +0000 UTC m=+658.515887200" lastFinishedPulling="2025-11-27 17:20:46.209527736 +0000 UTC m=+668.552354054" observedRunningTime="2025-11-27 17:20:47.445225636 +0000 UTC m=+669.788051984" watchObservedRunningTime="2025-11-27 17:20:47.450458159 +0000 UTC m=+669.793284487" Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.476949 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-qdcnd" podStartSLOduration=32.371054572 podStartE2EDuration="41.476922829s" podCreationTimestamp="2025-11-27 17:20:06 +0000 UTC" firstStartedPulling="2025-11-27 17:20:37.171321594 +0000 UTC m=+659.514147912" lastFinishedPulling="2025-11-27 17:20:46.277189851 +0000 UTC m=+668.620016169" observedRunningTime="2025-11-27 17:20:47.472254241 +0000 UTC m=+669.815080569" watchObservedRunningTime="2025-11-27 17:20:47.476922829 +0000 UTC m=+669.819749157" Nov 27 17:20:47 crc kubenswrapper[4792]: I1127 17:20:47.492229 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" podStartSLOduration=29.270929335 podStartE2EDuration="41.492203987s" podCreationTimestamp="2025-11-27 17:20:06 +0000 UTC" firstStartedPulling="2025-11-27 17:20:33.989057584 +0000 UTC m=+656.331883902" lastFinishedPulling="2025-11-27 17:20:46.210332216 +0000 UTC m=+668.553158554" observedRunningTime="2025-11-27 17:20:47.486214255 +0000 UTC m=+669.829040593" watchObservedRunningTime="2025-11-27 17:20:47.492203987 +0000 UTC m=+669.835030315" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.608023 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-r7tdl"] Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.609333 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-r7tdl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.612053 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.612951 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.616796 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-gjpkj" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.643296 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-r7tdl"] Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.651324 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l4gfl"] Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.652167 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.654889 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lqhdk" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.659289 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l4gfl"] Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.701174 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zk24k"] Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.702030 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-zk24k" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.706659 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zk24k"] Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.711061 4792 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-489gl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.730349 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gj9d\" (UniqueName: \"kubernetes.io/projected/d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f-kube-api-access-8gj9d\") pod \"cert-manager-5b446d88c5-r7tdl\" (UID: \"d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f\") " pod="cert-manager/cert-manager-5b446d88c5-r7tdl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.831808 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vscvz\" (UniqueName: \"kubernetes.io/projected/ef16aa33-7753-41e0-b78f-533ea2f2dd76-kube-api-access-vscvz\") pod \"cert-manager-cainjector-7f985d654d-zk24k\" (UID: \"ef16aa33-7753-41e0-b78f-533ea2f2dd76\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zk24k" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.831905 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59q8t\" (UniqueName: \"kubernetes.io/projected/3d4b582d-f8e6-477c-be1e-36f53bbc52e5-kube-api-access-59q8t\") pod \"cert-manager-webhook-5655c58dd6-l4gfl\" (UID: \"3d4b582d-f8e6-477c-be1e-36f53bbc52e5\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.831936 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gj9d\" (UniqueName: \"kubernetes.io/projected/d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f-kube-api-access-8gj9d\") pod \"cert-manager-5b446d88c5-r7tdl\" (UID: \"d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f\") " pod="cert-manager/cert-manager-5b446d88c5-r7tdl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.835304 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-pjkg6" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.852976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gj9d\" (UniqueName: \"kubernetes.io/projected/d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f-kube-api-access-8gj9d\") pod \"cert-manager-5b446d88c5-r7tdl\" (UID: \"d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f\") " pod="cert-manager/cert-manager-5b446d88c5-r7tdl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.931355 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-r7tdl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.932890 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59q8t\" (UniqueName: \"kubernetes.io/projected/3d4b582d-f8e6-477c-be1e-36f53bbc52e5-kube-api-access-59q8t\") pod \"cert-manager-webhook-5655c58dd6-l4gfl\" (UID: \"3d4b582d-f8e6-477c-be1e-36f53bbc52e5\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.932988 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscvz\" (UniqueName: \"kubernetes.io/projected/ef16aa33-7753-41e0-b78f-533ea2f2dd76-kube-api-access-vscvz\") pod \"cert-manager-cainjector-7f985d654d-zk24k\" (UID: \"ef16aa33-7753-41e0-b78f-533ea2f2dd76\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zk24k" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.957382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59q8t\" (UniqueName: \"kubernetes.io/projected/3d4b582d-f8e6-477c-be1e-36f53bbc52e5-kube-api-access-59q8t\") pod \"cert-manager-webhook-5655c58dd6-l4gfl\" (UID: \"3d4b582d-f8e6-477c-be1e-36f53bbc52e5\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.964684 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vscvz\" (UniqueName: \"kubernetes.io/projected/ef16aa33-7753-41e0-b78f-533ea2f2dd76-kube-api-access-vscvz\") pod \"cert-manager-cainjector-7f985d654d-zk24k\" (UID: \"ef16aa33-7753-41e0-b78f-533ea2f2dd76\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zk24k" Nov 27 17:20:56 crc kubenswrapper[4792]: I1127 17:20:56.971586 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" Nov 27 17:20:57 crc kubenswrapper[4792]: I1127 17:20:57.018312 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-zk24k" Nov 27 17:20:57 crc kubenswrapper[4792]: W1127 17:20:57.339223 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef16aa33_7753_41e0_b78f_533ea2f2dd76.slice/crio-a91eefcf5660fde7811c095e5730aa0bc008e6a81f3cf70ee1045b97251b0eb4 WatchSource:0}: Error finding container a91eefcf5660fde7811c095e5730aa0bc008e6a81f3cf70ee1045b97251b0eb4: Status 404 returned error can't find the container with id a91eefcf5660fde7811c095e5730aa0bc008e6a81f3cf70ee1045b97251b0eb4 Nov 27 17:20:57 crc kubenswrapper[4792]: I1127 17:20:57.343091 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zk24k"] Nov 27 17:20:57 crc kubenswrapper[4792]: I1127 17:20:57.408262 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-r7tdl"] Nov 27 17:20:57 crc kubenswrapper[4792]: W1127 17:20:57.409959 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6ba8d56_25bc_4fa4_bbeb_8412cf566d8f.slice/crio-20cb968902d910024307116543b4970bbe52904592a2e1b8d73d55d2f97a4bda WatchSource:0}: Error finding container 20cb968902d910024307116543b4970bbe52904592a2e1b8d73d55d2f97a4bda: Status 404 returned error can't find the container with id 20cb968902d910024307116543b4970bbe52904592a2e1b8d73d55d2f97a4bda Nov 27 17:20:57 crc kubenswrapper[4792]: I1127 17:20:57.464136 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l4gfl"] Nov 27 17:20:57 crc kubenswrapper[4792]: W1127 17:20:57.468206 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d4b582d_f8e6_477c_be1e_36f53bbc52e5.slice/crio-d1829a4790331e9a964a0aace7ea5f8cbae8288e493967604a7cfcaf4ba0e9d4 WatchSource:0}: Error finding container d1829a4790331e9a964a0aace7ea5f8cbae8288e493967604a7cfcaf4ba0e9d4: Status 404 returned error can't find the container with id d1829a4790331e9a964a0aace7ea5f8cbae8288e493967604a7cfcaf4ba0e9d4 Nov 27 17:20:57 crc kubenswrapper[4792]: I1127 17:20:57.486808 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-zk24k" event={"ID":"ef16aa33-7753-41e0-b78f-533ea2f2dd76","Type":"ContainerStarted","Data":"a91eefcf5660fde7811c095e5730aa0bc008e6a81f3cf70ee1045b97251b0eb4"} Nov 27 17:20:57 crc kubenswrapper[4792]: I1127 17:20:57.488197 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-r7tdl" event={"ID":"d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f","Type":"ContainerStarted","Data":"20cb968902d910024307116543b4970bbe52904592a2e1b8d73d55d2f97a4bda"} Nov 27 17:20:57 crc kubenswrapper[4792]: I1127 17:20:57.489213 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" event={"ID":"3d4b582d-f8e6-477c-be1e-36f53bbc52e5","Type":"ContainerStarted","Data":"d1829a4790331e9a964a0aace7ea5f8cbae8288e493967604a7cfcaf4ba0e9d4"} Nov 27 17:21:03 crc kubenswrapper[4792]: I1127 17:21:03.527403 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-zk24k" event={"ID":"ef16aa33-7753-41e0-b78f-533ea2f2dd76","Type":"ContainerStarted","Data":"85c462ba401ac98bf0382d75da8313d7e23d5124e3d022989cb0fc92156fdcfa"} Nov 27 17:21:03 
crc kubenswrapper[4792]: I1127 17:21:03.528959 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-r7tdl" event={"ID":"d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f","Type":"ContainerStarted","Data":"f5d60ea757ea4a944444a50dc836bb8a89f955dd4d1c018b5575ec87f0bc8899"} Nov 27 17:21:03 crc kubenswrapper[4792]: I1127 17:21:03.530813 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" event={"ID":"3d4b582d-f8e6-477c-be1e-36f53bbc52e5","Type":"ContainerStarted","Data":"8b4a8e73b27361e28d4d349d8d4fcd38665e7473463c88d33b2e3fe6be862ee9"} Nov 27 17:21:03 crc kubenswrapper[4792]: I1127 17:21:03.530942 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" Nov 27 17:21:03 crc kubenswrapper[4792]: I1127 17:21:03.546339 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-zk24k" podStartSLOduration=1.8660688539999999 podStartE2EDuration="7.546316769s" podCreationTimestamp="2025-11-27 17:20:56 +0000 UTC" firstStartedPulling="2025-11-27 17:20:57.341722613 +0000 UTC m=+679.684548931" lastFinishedPulling="2025-11-27 17:21:03.021970528 +0000 UTC m=+685.364796846" observedRunningTime="2025-11-27 17:21:03.542480851 +0000 UTC m=+685.885307179" watchObservedRunningTime="2025-11-27 17:21:03.546316769 +0000 UTC m=+685.889143087" Nov 27 17:21:03 crc kubenswrapper[4792]: I1127 17:21:03.586852 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-r7tdl" podStartSLOduration=1.909562106 podStartE2EDuration="7.586837196s" podCreationTimestamp="2025-11-27 17:20:56 +0000 UTC" firstStartedPulling="2025-11-27 17:20:57.412421365 +0000 UTC m=+679.755247683" lastFinishedPulling="2025-11-27 17:21:03.089696455 +0000 UTC m=+685.432522773" observedRunningTime="2025-11-27 17:21:03.58384817 +0000 UTC m=+685.926674488" watchObservedRunningTime="2025-11-27 17:21:03.586837196 +0000 UTC m=+685.929663514" Nov 27 17:21:03 crc kubenswrapper[4792]: I1127 17:21:03.618063 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" podStartSLOduration=1.94011192 podStartE2EDuration="7.618045687s" podCreationTimestamp="2025-11-27 17:20:56 +0000 UTC" firstStartedPulling="2025-11-27 17:20:57.478572021 +0000 UTC m=+679.821398339" lastFinishedPulling="2025-11-27 17:21:03.156505788 +0000 UTC m=+685.499332106" observedRunningTime="2025-11-27 17:21:03.613880581 +0000 UTC m=+685.956706899" watchObservedRunningTime="2025-11-27 17:21:03.618045687 +0000 UTC m=+685.960872005" Nov 27 17:21:11 crc kubenswrapper[4792]: I1127 17:21:11.976506 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-l4gfl" Nov 27 17:21:38 crc kubenswrapper[4792]: I1127 17:21:38.290419 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:21:38 crc kubenswrapper[4792]: I1127 17:21:38.290997 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.715551 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l"] Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.717265 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.725605 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l"] Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.725634 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.725812 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.725847 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dszn6\" (UniqueName: \"kubernetes.io/projected/8bf2b066-d894-4413-af14-7e9e03e8d619-kube-api-access-dszn6\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.729163 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.828067 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.828143 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dszn6\" (UniqueName: \"kubernetes.io/projected/8bf2b066-d894-4413-af14-7e9e03e8d619-kube-api-access-dszn6\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.828225 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.828704 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.828760 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.858795 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dszn6\" (UniqueName: \"kubernetes.io/projected/8bf2b066-d894-4413-af14-7e9e03e8d619-kube-api-access-dszn6\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.961636 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k"] Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.963137 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:39 crc kubenswrapper[4792]: I1127 17:21:39.971197 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k"] Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.030749 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.030931 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.030995 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkdp\" (UniqueName: \"kubernetes.io/projected/e6d145eb-d07b-4d67-955e-4a85351799d3-kube-api-access-kqkdp\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.087948 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.132338 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.132402 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqkdp\" (UniqueName: \"kubernetes.io/projected/e6d145eb-d07b-4d67-955e-4a85351799d3-kube-api-access-kqkdp\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.132464 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.133100 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.133172 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.155217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqkdp\" (UniqueName: \"kubernetes.io/projected/e6d145eb-d07b-4d67-955e-4a85351799d3-kube-api-access-kqkdp\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.287180 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.343865 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l"] Nov 27 17:21:40 crc kubenswrapper[4792]: W1127 17:21:40.354181 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf2b066_d894_4413_af14_7e9e03e8d619.slice/crio-b7a92153e4b25aa240159e075dd21743eb18054066b4194d27ae567651fd155f WatchSource:0}: Error finding container b7a92153e4b25aa240159e075dd21743eb18054066b4194d27ae567651fd155f: Status 404 returned error can't find the container with id b7a92153e4b25aa240159e075dd21743eb18054066b4194d27ae567651fd155f Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.584547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k"] Nov 27 17:21:40 crc kubenswrapper[4792]: W1127 17:21:40.588323 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6d145eb_d07b_4d67_955e_4a85351799d3.slice/crio-df8c736e9ebe6309636dbc1f7ba93ff903b6374b8015e298a3f6dec75c95f96c WatchSource:0}: Error finding container df8c736e9ebe6309636dbc1f7ba93ff903b6374b8015e298a3f6dec75c95f96c: Status 404 returned error can't find the container with id df8c736e9ebe6309636dbc1f7ba93ff903b6374b8015e298a3f6dec75c95f96c Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.820800 4792 generic.go:334] "Generic (PLEG): container finished" podID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerID="a370e4de6153f10d199693b52741ea975e0b6191a915077846e40e402b92385f" exitCode=0 Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.820847 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" event={"ID":"e6d145eb-d07b-4d67-955e-4a85351799d3","Type":"ContainerDied","Data":"a370e4de6153f10d199693b52741ea975e0b6191a915077846e40e402b92385f"} Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.820892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" event={"ID":"e6d145eb-d07b-4d67-955e-4a85351799d3","Type":"ContainerStarted","Data":"df8c736e9ebe6309636dbc1f7ba93ff903b6374b8015e298a3f6dec75c95f96c"} Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.822640 4792 generic.go:334] "Generic (PLEG): container finished" podID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerID="7137a81be79b1ebe8ee5a356e9f13b8d6fb337ccc8944953fa8e87d66b3e3982" exitCode=0 Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.822722 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" event={"ID":"8bf2b066-d894-4413-af14-7e9e03e8d619","Type":"ContainerDied","Data":"7137a81be79b1ebe8ee5a356e9f13b8d6fb337ccc8944953fa8e87d66b3e3982"} Nov 27 17:21:40 crc kubenswrapper[4792]: I1127 17:21:40.822829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" 
event={"ID":"8bf2b066-d894-4413-af14-7e9e03e8d619","Type":"ContainerStarted","Data":"b7a92153e4b25aa240159e075dd21743eb18054066b4194d27ae567651fd155f"} Nov 27 17:21:42 crc kubenswrapper[4792]: I1127 17:21:42.838566 4792 generic.go:334] "Generic (PLEG): container finished" podID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerID="9caecce454cebb3b77386f25a60c6ce4f3526275eb9eff853b38ca3c35477784" exitCode=0 Nov 27 17:21:42 crc kubenswrapper[4792]: I1127 17:21:42.838897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" event={"ID":"e6d145eb-d07b-4d67-955e-4a85351799d3","Type":"ContainerDied","Data":"9caecce454cebb3b77386f25a60c6ce4f3526275eb9eff853b38ca3c35477784"} Nov 27 17:21:43 crc kubenswrapper[4792]: I1127 17:21:43.848836 4792 generic.go:334] "Generic (PLEG): container finished" podID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerID="7f02b2c5bc3df22f4060719c2e0f6191b5dcacde749f72359efa759e41e2d74a" exitCode=0 Nov 27 17:21:43 crc kubenswrapper[4792]: I1127 17:21:43.848932 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" event={"ID":"e6d145eb-d07b-4d67-955e-4a85351799d3","Type":"ContainerDied","Data":"7f02b2c5bc3df22f4060719c2e0f6191b5dcacde749f72359efa759e41e2d74a"} Nov 27 17:21:43 crc kubenswrapper[4792]: I1127 17:21:43.850505 4792 generic.go:334] "Generic (PLEG): container finished" podID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerID="595671617b442dbd4f3f47c1ad19d4b471b1a497d7574bec66f7a52c9f7989d2" exitCode=0 Nov 27 17:21:43 crc kubenswrapper[4792]: I1127 17:21:43.850535 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" event={"ID":"8bf2b066-d894-4413-af14-7e9e03e8d619","Type":"ContainerDied","Data":"595671617b442dbd4f3f47c1ad19d4b471b1a497d7574bec66f7a52c9f7989d2"} Nov 27 17:21:44 crc kubenswrapper[4792]: I1127 17:21:44.859618 4792 generic.go:334] "Generic (PLEG): container finished" podID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerID="00e630d6f580aa669a01630e1965e3cbaa3572411b3e5692893b17b3e80bcdd6" exitCode=0 Nov 27 17:21:44 crc kubenswrapper[4792]: I1127 17:21:44.860592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" event={"ID":"8bf2b066-d894-4413-af14-7e9e03e8d619","Type":"ContainerDied","Data":"00e630d6f580aa669a01630e1965e3cbaa3572411b3e5692893b17b3e80bcdd6"} Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.138883 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.303463 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-util\") pod \"e6d145eb-d07b-4d67-955e-4a85351799d3\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.303570 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-bundle\") pod \"e6d145eb-d07b-4d67-955e-4a85351799d3\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.303738 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqkdp\" (UniqueName: \"kubernetes.io/projected/e6d145eb-d07b-4d67-955e-4a85351799d3-kube-api-access-kqkdp\") pod \"e6d145eb-d07b-4d67-955e-4a85351799d3\" (UID: \"e6d145eb-d07b-4d67-955e-4a85351799d3\") " Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.304412 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-bundle" (OuterVolumeSpecName: "bundle") pod "e6d145eb-d07b-4d67-955e-4a85351799d3" (UID: "e6d145eb-d07b-4d67-955e-4a85351799d3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.312978 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d145eb-d07b-4d67-955e-4a85351799d3-kube-api-access-kqkdp" (OuterVolumeSpecName: "kube-api-access-kqkdp") pod "e6d145eb-d07b-4d67-955e-4a85351799d3" (UID: "e6d145eb-d07b-4d67-955e-4a85351799d3"). InnerVolumeSpecName "kube-api-access-kqkdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.316831 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-util" (OuterVolumeSpecName: "util") pod "e6d145eb-d07b-4d67-955e-4a85351799d3" (UID: "e6d145eb-d07b-4d67-955e-4a85351799d3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.406101 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-util\") on node \"crc\" DevicePath \"\"" Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.406370 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6d145eb-d07b-4d67-955e-4a85351799d3-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.406380 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqkdp\" (UniqueName: \"kubernetes.io/projected/e6d145eb-d07b-4d67-955e-4a85351799d3-kube-api-access-kqkdp\") on node \"crc\" DevicePath \"\"" Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.875798 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.875853 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k" event={"ID":"e6d145eb-d07b-4d67-955e-4a85351799d3","Type":"ContainerDied","Data":"df8c736e9ebe6309636dbc1f7ba93ff903b6374b8015e298a3f6dec75c95f96c"} Nov 27 17:21:45 crc kubenswrapper[4792]: I1127 17:21:45.875882 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df8c736e9ebe6309636dbc1f7ba93ff903b6374b8015e298a3f6dec75c95f96c" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.159391 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.320238 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-util\") pod \"8bf2b066-d894-4413-af14-7e9e03e8d619\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.320360 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-bundle\") pod \"8bf2b066-d894-4413-af14-7e9e03e8d619\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.320415 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dszn6\" (UniqueName: \"kubernetes.io/projected/8bf2b066-d894-4413-af14-7e9e03e8d619-kube-api-access-dszn6\") pod \"8bf2b066-d894-4413-af14-7e9e03e8d619\" (UID: \"8bf2b066-d894-4413-af14-7e9e03e8d619\") " Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.321112 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-bundle" (OuterVolumeSpecName: "bundle") pod "8bf2b066-d894-4413-af14-7e9e03e8d619" (UID: "8bf2b066-d894-4413-af14-7e9e03e8d619"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.326800 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf2b066-d894-4413-af14-7e9e03e8d619-kube-api-access-dszn6" (OuterVolumeSpecName: "kube-api-access-dszn6") pod "8bf2b066-d894-4413-af14-7e9e03e8d619" (UID: "8bf2b066-d894-4413-af14-7e9e03e8d619"). InnerVolumeSpecName "kube-api-access-dszn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.331395 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-util" (OuterVolumeSpecName: "util") pod "8bf2b066-d894-4413-af14-7e9e03e8d619" (UID: "8bf2b066-d894-4413-af14-7e9e03e8d619"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.422469 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-util\") on node \"crc\" DevicePath \"\"" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.422517 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8bf2b066-d894-4413-af14-7e9e03e8d619-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.422533 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dszn6\" (UniqueName: \"kubernetes.io/projected/8bf2b066-d894-4413-af14-7e9e03e8d619-kube-api-access-dszn6\") on node \"crc\" DevicePath \"\"" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.883346 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" event={"ID":"8bf2b066-d894-4413-af14-7e9e03e8d619","Type":"ContainerDied","Data":"b7a92153e4b25aa240159e075dd21743eb18054066b4194d27ae567651fd155f"} Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.883381 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a92153e4b25aa240159e075dd21743eb18054066b4194d27ae567651fd155f" Nov 27 17:21:46 crc kubenswrapper[4792]: I1127 17:21:46.883438 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.199608 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-g2l98"] Nov 27 17:21:49 crc kubenswrapper[4792]: E1127 17:21:49.199923 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerName="pull" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.199939 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerName="pull" Nov 27 17:21:49 crc kubenswrapper[4792]: E1127 17:21:49.199955 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerName="extract" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.199963 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerName="extract" Nov 27 17:21:49 crc kubenswrapper[4792]: E1127 17:21:49.199977 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerName="pull" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.199985 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerName="pull" Nov 27 17:21:49 crc kubenswrapper[4792]: E1127 17:21:49.199997 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerName="extract" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.200004 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerName="extract" Nov 27 17:21:49 crc kubenswrapper[4792]: E1127 17:21:49.200020 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerName="util" Nov 27 17:21:49 
crc kubenswrapper[4792]: I1127 17:21:49.200027 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerName="util" Nov 27 17:21:49 crc kubenswrapper[4792]: E1127 17:21:49.200045 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerName="util" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.200053 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerName="util" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.200185 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d145eb-d07b-4d67-955e-4a85351799d3" containerName="extract" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.200197 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf2b066-d894-4413-af14-7e9e03e8d619" containerName="extract" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.200753 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-g2l98" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.202553 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-6528c" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.202817 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.203084 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.215435 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-g2l98"] Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.284558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls8dm\" (UniqueName: \"kubernetes.io/projected/286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0-kube-api-access-ls8dm\") pod \"cluster-logging-operator-ff9846bd-g2l98\" (UID: \"286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-g2l98" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.386113 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls8dm\" (UniqueName: \"kubernetes.io/projected/286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0-kube-api-access-ls8dm\") pod \"cluster-logging-operator-ff9846bd-g2l98\" (UID: \"286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-g2l98" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.403351 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls8dm\" (UniqueName: \"kubernetes.io/projected/286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0-kube-api-access-ls8dm\") pod \"cluster-logging-operator-ff9846bd-g2l98\" (UID: \"286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-g2l98" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.516378 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-g2l98" Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.767800 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-g2l98"] Nov 27 17:21:49 crc kubenswrapper[4792]: I1127 17:21:49.901456 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-g2l98" event={"ID":"286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0","Type":"ContainerStarted","Data":"80f34cba4b5c84440f5f58db99221c9d20dfb21880b9614fd4852de2fc61377b"} Nov 27 17:21:55 crc kubenswrapper[4792]: I1127 17:21:55.945335 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-g2l98" event={"ID":"286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0","Type":"ContainerStarted","Data":"d544bafefddac04a5296c6b4e0b3d4c4e8f38985c2ab871635fcd84ee68385b3"} Nov 27 17:21:55 crc kubenswrapper[4792]: I1127 17:21:55.961747 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-g2l98" podStartSLOduration=1.314675595 podStartE2EDuration="6.961725846s" podCreationTimestamp="2025-11-27 17:21:49 +0000 UTC" firstStartedPulling="2025-11-27 17:21:49.775440545 +0000 UTC m=+732.118266873" lastFinishedPulling="2025-11-27 17:21:55.422490806 +0000 UTC m=+737.765317124" observedRunningTime="2025-11-27 17:21:55.958821773 +0000 UTC m=+738.301648101" watchObservedRunningTime="2025-11-27 17:21:55.961725846 +0000 UTC m=+738.304552164" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.146882 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj"] Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.148873 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.151572 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.152047 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.152213 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.152535 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.152663 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.152760 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-6rkn6" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.164626 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj"] Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.329906 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-webhook-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.329983 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.330098 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/01e17fe3-0b99-4719-8a19-bdb45dabeaac-manager-config\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.330328 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzrz\" (UniqueName: \"kubernetes.io/projected/01e17fe3-0b99-4719-8a19-bdb45dabeaac-kube-api-access-9jzrz\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.330411 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-apiservice-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.431289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.431355 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/01e17fe3-0b99-4719-8a19-bdb45dabeaac-manager-config\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.431450 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jzrz\" (UniqueName: \"kubernetes.io/projected/01e17fe3-0b99-4719-8a19-bdb45dabeaac-kube-api-access-9jzrz\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.431483 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-apiservice-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.431517 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-webhook-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.432349 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/01e17fe3-0b99-4719-8a19-bdb45dabeaac-manager-config\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.437350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.440986 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-webhook-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.441196 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01e17fe3-0b99-4719-8a19-bdb45dabeaac-apiservice-cert\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.459467 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jzrz\" (UniqueName: \"kubernetes.io/projected/01e17fe3-0b99-4719-8a19-bdb45dabeaac-kube-api-access-9jzrz\") pod \"loki-operator-controller-manager-5994f6989f-4s6cj\" (UID: \"01e17fe3-0b99-4719-8a19-bdb45dabeaac\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.463939 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:02 crc kubenswrapper[4792]: I1127 17:22:02.973419 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj"] Nov 27 17:22:03 crc kubenswrapper[4792]: I1127 17:22:03.996557 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" event={"ID":"01e17fe3-0b99-4719-8a19-bdb45dabeaac","Type":"ContainerStarted","Data":"278dc8ba279ecb7dde93cc4bf8d6ce3a7ae03d8b1b2bd6817b9219f90b17ec99"} Nov 27 17:22:08 crc kubenswrapper[4792]: I1127 17:22:08.030805 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" event={"ID":"01e17fe3-0b99-4719-8a19-bdb45dabeaac","Type":"ContainerStarted","Data":"e329ffb981c0835d9808a64bd2b5d530ac28d8e2c45070d3c7465505b69b6a32"} Nov 27 17:22:08 crc kubenswrapper[4792]: I1127 17:22:08.290600 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:22:08 crc kubenswrapper[4792]: I1127 17:22:08.290772 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:22:11 crc kubenswrapper[4792]: I1127 17:22:11.023479 4792 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 27 17:22:19 crc kubenswrapper[4792]: I1127 17:22:19.105173 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" 
event={"ID":"01e17fe3-0b99-4719-8a19-bdb45dabeaac","Type":"ContainerStarted","Data":"de78d556eaf2b4ac3d96d9a7da66fa0e3eacbf51d1bbe3c66862e0484450953a"} Nov 27 17:22:19 crc kubenswrapper[4792]: I1127 17:22:19.108040 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:19 crc kubenswrapper[4792]: I1127 17:22:19.109959 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" Nov 27 17:22:19 crc kubenswrapper[4792]: I1127 17:22:19.138546 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5994f6989f-4s6cj" podStartSLOduration=2.131113601 podStartE2EDuration="17.138531566s" podCreationTimestamp="2025-11-27 17:22:02 +0000 UTC" firstStartedPulling="2025-11-27 17:22:02.986813766 +0000 UTC m=+745.329640084" lastFinishedPulling="2025-11-27 17:22:17.994231731 +0000 UTC m=+760.337058049" observedRunningTime="2025-11-27 17:22:19.13353142 +0000 UTC m=+761.476357738" watchObservedRunningTime="2025-11-27 17:22:19.138531566 +0000 UTC m=+761.481357884" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.517432 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.518662 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.520529 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.520799 4792 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-gmlwn" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.520838 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.528622 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.632422 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ce2a3e13-862f-4586-929e-1b9d78d8f754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce2a3e13-862f-4586-929e-1b9d78d8f754\") pod \"minio\" (UID: \"5de19a22-f758-42ed-bb2f-47cf02e788b6\") " pod="minio-dev/minio" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.632515 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrxv\" (UniqueName: \"kubernetes.io/projected/5de19a22-f758-42ed-bb2f-47cf02e788b6-kube-api-access-ghrxv\") pod \"minio\" (UID: \"5de19a22-f758-42ed-bb2f-47cf02e788b6\") " pod="minio-dev/minio" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.734257 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ce2a3e13-862f-4586-929e-1b9d78d8f754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce2a3e13-862f-4586-929e-1b9d78d8f754\") pod \"minio\" (UID: \"5de19a22-f758-42ed-bb2f-47cf02e788b6\") " pod="minio-dev/minio" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.734321 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrxv\" (UniqueName: 
\"kubernetes.io/projected/5de19a22-f758-42ed-bb2f-47cf02e788b6-kube-api-access-ghrxv\") pod \"minio\" (UID: \"5de19a22-f758-42ed-bb2f-47cf02e788b6\") " pod="minio-dev/minio" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.737498 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.737536 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ce2a3e13-862f-4586-929e-1b9d78d8f754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce2a3e13-862f-4586-929e-1b9d78d8f754\") pod \"minio\" (UID: \"5de19a22-f758-42ed-bb2f-47cf02e788b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/df2e1162351dfd56fa55cc30de2466b863d3dfac15d7174bb187f650b06c918f/globalmount\"" pod="minio-dev/minio" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.756581 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrxv\" (UniqueName: \"kubernetes.io/projected/5de19a22-f758-42ed-bb2f-47cf02e788b6-kube-api-access-ghrxv\") pod \"minio\" (UID: \"5de19a22-f758-42ed-bb2f-47cf02e788b6\") " pod="minio-dev/minio" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.769473 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ce2a3e13-862f-4586-929e-1b9d78d8f754\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ce2a3e13-862f-4586-929e-1b9d78d8f754\") pod \"minio\" (UID: \"5de19a22-f758-42ed-bb2f-47cf02e788b6\") " pod="minio-dev/minio" Nov 27 17:22:22 crc kubenswrapper[4792]: I1127 17:22:22.882813 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 27 17:22:23 crc kubenswrapper[4792]: I1127 17:22:23.322288 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 27 17:22:24 crc kubenswrapper[4792]: I1127 17:22:24.143340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"5de19a22-f758-42ed-bb2f-47cf02e788b6","Type":"ContainerStarted","Data":"b7fd64989f2327d02241121a10890e7a8c731616308d6cbc07aee85f0ba9d695"} Nov 27 17:22:27 crc kubenswrapper[4792]: I1127 17:22:27.164160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"5de19a22-f758-42ed-bb2f-47cf02e788b6","Type":"ContainerStarted","Data":"47ca46b3ba6c651c7b359a42a3e686d90b0b37dc60520e90e87c5fdfa599ca64"} Nov 27 17:22:27 crc kubenswrapper[4792]: I1127 17:22:27.182718 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.155014263 podStartE2EDuration="7.182690453s" podCreationTimestamp="2025-11-27 17:22:20 +0000 UTC" firstStartedPulling="2025-11-27 17:22:23.334995766 +0000 UTC m=+765.677822084" lastFinishedPulling="2025-11-27 17:22:26.362671966 +0000 UTC m=+768.705498274" observedRunningTime="2025-11-27 17:22:27.180231731 +0000 UTC m=+769.523058049" watchObservedRunningTime="2025-11-27 17:22:27.182690453 +0000 UTC m=+769.525516771" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.037303 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.038624 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.041259 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.042045 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.042221 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.042356 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.042498 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-jts9j" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.079513 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.153458 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.153520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b883f630-7c31-4a1a-9633-8770b40c5a69-config\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.153546 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.153622 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6k9k\" (UniqueName: \"kubernetes.io/projected/b883f630-7c31-4a1a-9633-8770b40c5a69-kube-api-access-j6k9k\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.153675 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.214022 4792 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-9rscz"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.214880 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.216900 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.217504 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.218035 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.228004 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-9rscz"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.255635 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.255724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b883f630-7c31-4a1a-9633-8770b40c5a69-config\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.255758 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.255835 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6k9k\" (UniqueName: \"kubernetes.io/projected/b883f630-7c31-4a1a-9633-8770b40c5a69-kube-api-access-j6k9k\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.255885 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.259252 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b883f630-7c31-4a1a-9633-8770b40c5a69-config\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " 
pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.259764 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.266485 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.280002 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6k9k\" (UniqueName: \"kubernetes.io/projected/b883f630-7c31-4a1a-9633-8770b40c5a69-kube-api-access-j6k9k\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.285824 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.286757 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.287301 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/b883f630-7c31-4a1a-9633-8770b40c5a69-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-dgnv7\" (UID: \"b883f630-7c31-4a1a-9633-8770b40c5a69\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.292553 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.292763 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.295435 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.356780 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f70d890-772f-49eb-9c3b-0553bc2349ca-config\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.356844 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1828379a-4323-4161-881c-cf67367db9d4-config\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.356866 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.356890 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.356919 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh9q2\" (UniqueName: \"kubernetes.io/projected/8f70d890-772f-49eb-9c3b-0553bc2349ca-kube-api-access-fh9q2\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.356946 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.356962 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.356982 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.357011 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.357029 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.357046 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz4vb\" (UniqueName: \"kubernetes.io/projected/1828379a-4323-4161-881c-cf67367db9d4-kube-api-access-lz4vb\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.387963 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.396229 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.397456 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.404304 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.417912 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.418159 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.418275 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.418372 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.423577 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-767db5f6c6-tqz78"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.424830 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.430700 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-pkr5w" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.448864 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.454727 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767db5f6c6-tqz78"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458324 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1828379a-4323-4161-881c-cf67367db9d4-config\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458363 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458385 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458406 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-rbac\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458435 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-lokistack-gateway\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458454 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh9q2\" (UniqueName: \"kubernetes.io/projected/8f70d890-772f-49eb-9c3b-0553bc2349ca-kube-api-access-fh9q2\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458480 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-query-frontend-http\") pod 
\"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458496 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458513 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-tenants\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458567 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458590 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4lkl\" (UniqueName: \"kubernetes.io/projected/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-kube-api-access-m4lkl\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458613 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458631 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-ca-bundle\") pod 
\"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458662 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458679 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-tls-secret\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz4vb\" (UniqueName: \"kubernetes.io/projected/1828379a-4323-4161-881c-cf67367db9d4-kube-api-access-lz4vb\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.458736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f70d890-772f-49eb-9c3b-0553bc2349ca-config\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.459368 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1828379a-4323-4161-881c-cf67367db9d4-config\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.459730 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f70d890-772f-49eb-9c3b-0553bc2349ca-config\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.459954 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.460317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc 
kubenswrapper[4792]: I1127 17:22:31.462837 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.464254 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.472902 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.473989 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8f70d890-772f-49eb-9c3b-0553bc2349ca-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.483543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/1828379a-4323-4161-881c-cf67367db9d4-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.494116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh9q2\" (UniqueName: \"kubernetes.io/projected/8f70d890-772f-49eb-9c3b-0553bc2349ca-kube-api-access-fh9q2\") pod \"logging-loki-querier-5895d59bb8-9rscz\" (UID: \"8f70d890-772f-49eb-9c3b-0553bc2349ca\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.496317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz4vb\" (UniqueName: \"kubernetes.io/projected/1828379a-4323-4161-881c-cf67367db9d4-kube-api-access-lz4vb\") pod \"logging-loki-query-frontend-84558f7c9f-l69ln\" (UID: \"1828379a-4323-4161-881c-cf67367db9d4\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.531994 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559590 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559632 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-tenants\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559710 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559761 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4lkl\" (UniqueName: \"kubernetes.io/projected/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-kube-api-access-m4lkl\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559787 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-tls-secret\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559803 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559826 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-rbac\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc 
kubenswrapper[4792]: I1127 17:22:31.559844 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-lokistack-gateway\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559862 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-tls-secret\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559907 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jngwd\" (UniqueName: \"kubernetes.io/projected/8435b802-65cf-46a0-89fa-fa55e43dfb68-kube-api-access-jngwd\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559933 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-tenants\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-rbac\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.559988 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-lokistack-gateway\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.560009 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.560972 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.561472 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-rbac\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.562106 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-lokistack-gateway\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.562534 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.565891 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-tls-secret\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.568294 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.569548 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-tenants\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.577450 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4lkl\" (UniqueName: \"kubernetes.io/projected/5e5bb18c-7c60-4ec3-ac94-e33904750bb8-kube-api-access-m4lkl\") pod \"logging-loki-gateway-767db5f6c6-qqqzr\" (UID: \"5e5bb18c-7c60-4ec3-ac94-e33904750bb8\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.631859 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.661725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-rbac\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.661765 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-lokistack-gateway\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.661790 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.661824 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-tls-secret\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.661841 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jngwd\" (UniqueName: \"kubernetes.io/projected/8435b802-65cf-46a0-89fa-fa55e43dfb68-kube-api-access-jngwd\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.661870 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-tenants\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.661910 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.661937 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.674186 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.674269 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.677474 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-rbac\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.678070 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/8435b802-65cf-46a0-89fa-fa55e43dfb68-lokistack-gateway\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.685575 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-tls-secret\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.686164 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.691947 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/8435b802-65cf-46a0-89fa-fa55e43dfb68-tenants\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.707245 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jngwd\" (UniqueName: \"kubernetes.io/projected/8435b802-65cf-46a0-89fa-fa55e43dfb68-kube-api-access-jngwd\") pod \"logging-loki-gateway-767db5f6c6-tqz78\" (UID: \"8435b802-65cf-46a0-89fa-fa55e43dfb68\") " pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.744702 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7"] Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.806028 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.810262 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" Nov 27 17:22:31 crc kubenswrapper[4792]: I1127 17:22:31.905002 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-9rscz"] Nov 27 17:22:31 crc kubenswrapper[4792]: W1127 17:22:31.921931 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f70d890_772f_49eb_9c3b_0553bc2349ca.slice/crio-c3f105b6310544b4479d6c2641739834e4db10cd96a0ec1a525431130b10f25f WatchSource:0}: Error finding container c3f105b6310544b4479d6c2641739834e4db10cd96a0ec1a525431130b10f25f: Status 404 returned error can't find the container with id c3f105b6310544b4479d6c2641739834e4db10cd96a0ec1a525431130b10f25f Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.190760 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.191760 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.194158 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.194783 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.201007 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" event={"ID":"8f70d890-772f-49eb-9c3b-0553bc2349ca","Type":"ContainerStarted","Data":"c3f105b6310544b4479d6c2641739834e4db10cd96a0ec1a525431130b10f25f"} Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.203207 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" event={"ID":"b883f630-7c31-4a1a-9633-8770b40c5a69","Type":"ContainerStarted","Data":"371114874a8ee2cc5d1272bc855ca45d15c80e932ababc21d0a3dff7190e5ba7"} Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.206816 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.269052 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.269846 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.269888 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-93a7ef70-0e22-4932-86a1-98083af3a576\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93a7ef70-0e22-4932-86a1-98083af3a576\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " 
pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.269910 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.269942 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.269963 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.270007 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a318073-842f-45ff-b6df-bc0abc0d576b-config\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.270252 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclnj\" (UniqueName: \"kubernetes.io/projected/2a318073-842f-45ff-b6df-bc0abc0d576b-kube-api-access-mclnj\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.270293 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f63d794-a106-4518-9ea6-93a87ef3955c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f63d794-a106-4518-9ea6-93a87ef3955c\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.270594 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.272366 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.272588 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.280486 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln"] Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.285933 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.353799 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr"] Nov 27 17:22:32 crc kubenswrapper[4792]: W1127 17:22:32.356375 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e5bb18c_7c60_4ec3_ac94_e33904750bb8.slice/crio-d58e142b9bff91f8615fe479cf4b6f5658cda4a4d59aef8e6f5987dcbffa0988 WatchSource:0}: Error finding container d58e142b9bff91f8615fe479cf4b6f5658cda4a4d59aef8e6f5987dcbffa0988: Status 404 returned error can't find the container with id d58e142b9bff91f8615fe479cf4b6f5658cda4a4d59aef8e6f5987dcbffa0988 Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.370918 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.371638 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.371807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.371842 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.371873 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a318073-842f-45ff-b6df-bc0abc0d576b-config\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.371921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aa924acf-3751-48f8-8d69-ce709adfe322\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa924acf-3751-48f8-8d69-ce709adfe322\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.371966 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclnj\" (UniqueName: \"kubernetes.io/projected/2a318073-842f-45ff-b6df-bc0abc0d576b-kube-api-access-mclnj\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372002 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7f63d794-a106-4518-9ea6-93a87ef3955c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f63d794-a106-4518-9ea6-93a87ef3955c\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372029 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372055 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372080 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372104 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpv6r\" (UniqueName: \"kubernetes.io/projected/6a9851c2-362b-425e-adf3-5056cbbfb169-kube-api-access-cpv6r\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372129 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9851c2-362b-425e-adf3-5056cbbfb169-config\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372154 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-93a7ef70-0e22-4932-86a1-98083af3a576\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93a7ef70-0e22-4932-86a1-98083af3a576\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 
17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372175 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.372594 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.373634 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a318073-842f-45ff-b6df-bc0abc0d576b-config\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.375732 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.378324 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.379392 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.379449 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-93a7ef70-0e22-4932-86a1-98083af3a576\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93a7ef70-0e22-4932-86a1-98083af3a576\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a90cb76cff8e477d71b9f96a7f9936b1d6ed75b7b20eb0f5aa13783ad34dbc69/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.379392 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.379527 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7f63d794-a106-4518-9ea6-93a87ef3955c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f63d794-a106-4518-9ea6-93a87ef3955c\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/636a170ceeb916db502a8f374c0dc70577353f704e1c7aea028d758c4af22557/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.381578 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.387425 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.387470 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.387975 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.388289 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/2a318073-842f-45ff-b6df-bc0abc0d576b-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.392602 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclnj\" (UniqueName: \"kubernetes.io/projected/2a318073-842f-45ff-b6df-bc0abc0d576b-kube-api-access-mclnj\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.401890 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767db5f6c6-tqz78"] Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.422082 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-93a7ef70-0e22-4932-86a1-98083af3a576\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-93a7ef70-0e22-4932-86a1-98083af3a576\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.438328 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7f63d794-a106-4518-9ea6-93a87ef3955c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f63d794-a106-4518-9ea6-93a87ef3955c\") pod \"logging-loki-ingester-0\" (UID: \"2a318073-842f-45ff-b6df-bc0abc0d576b\") " pod="openshift-logging/logging-loki-ingester-0" Nov 
27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473235 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473308 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473339 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473367 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aa924acf-3751-48f8-8d69-ce709adfe322\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa924acf-3751-48f8-8d69-ce709adfe322\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473549 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eb49fb3-87eb-429d-bdc8-808a8c11996f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eb49fb3-87eb-429d-bdc8-808a8c11996f\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473589 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpv6r\" (UniqueName: \"kubernetes.io/projected/6a9851c2-362b-425e-adf3-5056cbbfb169-kube-api-access-cpv6r\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473620 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9851c2-362b-425e-adf3-5056cbbfb169-config\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473770 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.473860 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.474043 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lws\" (UniqueName: \"kubernetes.io/projected/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-kube-api-access-k4lws\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.474077 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.474103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.474129 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.474147 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.474465 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.474776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a9851c2-362b-425e-adf3-5056cbbfb169-config\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.476417 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.476450 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aa924acf-3751-48f8-8d69-ce709adfe322\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa924acf-3751-48f8-8d69-ce709adfe322\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fe9341a22e4f96bb58ae278d4d68fe87797cc435773993efc7532e105e6aa268/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.477615 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.478312 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.480377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/6a9851c2-362b-425e-adf3-5056cbbfb169-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.489957 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpv6r\" (UniqueName: \"kubernetes.io/projected/6a9851c2-362b-425e-adf3-5056cbbfb169-kube-api-access-cpv6r\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.498555 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aa924acf-3751-48f8-8d69-ce709adfe322\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa924acf-3751-48f8-8d69-ce709adfe322\") pod \"logging-loki-compactor-0\" (UID: \"6a9851c2-362b-425e-adf3-5056cbbfb169\") " pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.511206 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.579618 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eb49fb3-87eb-429d-bdc8-808a8c11996f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eb49fb3-87eb-429d-bdc8-808a8c11996f\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.579737 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.579787 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.579816 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4lws\" (UniqueName: \"kubernetes.io/projected/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-kube-api-access-k4lws\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.579847 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.579879 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.579942 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.580825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.582865 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.585513 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.585961 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.586738 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.587443 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eb49fb3-87eb-429d-bdc8-808a8c11996f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eb49fb3-87eb-429d-bdc8-808a8c11996f\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/170aa28aa71965727bc8c44959e00a62682b35665c4542e359514541a9aa8ea4/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.587480 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.604356 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4lws\" (UniqueName: \"kubernetes.io/projected/3b4c2851-5058-4cfc-9efa-a5d94e7e8090-kube-api-access-k4lws\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.614118 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eb49fb3-87eb-429d-bdc8-808a8c11996f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eb49fb3-87eb-429d-bdc8-808a8c11996f\") pod \"logging-loki-index-gateway-0\" (UID: \"3b4c2851-5058-4cfc-9efa-a5d94e7e8090\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.738553 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.749786 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:32 crc kubenswrapper[4792]: I1127 17:22:32.918828 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 27 17:22:33 crc kubenswrapper[4792]: I1127 17:22:33.194920 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 27 17:22:33 crc kubenswrapper[4792]: W1127 17:22:33.200413 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a9851c2_362b_425e_adf3_5056cbbfb169.slice/crio-8b1bc277957c0cce89caddcb330a112587635e283ab324a65eae7cff1c23aad3 WatchSource:0}: Error finding container 8b1bc277957c0cce89caddcb330a112587635e283ab324a65eae7cff1c23aad3: Status 404 returned error can't find the container with id 8b1bc277957c0cce89caddcb330a112587635e283ab324a65eae7cff1c23aad3 Nov 27 17:22:33 crc kubenswrapper[4792]: I1127 17:22:33.212656 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" event={"ID":"8435b802-65cf-46a0-89fa-fa55e43dfb68","Type":"ContainerStarted","Data":"0be117c055fdb0566288fbcd93de19ee8522e245086f1a10c9081b19e208611a"} Nov 27 17:22:33 crc kubenswrapper[4792]: I1127 17:22:33.215285 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" event={"ID":"1828379a-4323-4161-881c-cf67367db9d4","Type":"ContainerStarted","Data":"6b685a29b84d208978c643907a7e39461ba9b628aca539d0135c99a9303901d0"} Nov 27 17:22:33 crc kubenswrapper[4792]: I1127 17:22:33.216512 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" event={"ID":"5e5bb18c-7c60-4ec3-ac94-e33904750bb8","Type":"ContainerStarted","Data":"d58e142b9bff91f8615fe479cf4b6f5658cda4a4d59aef8e6f5987dcbffa0988"} Nov 27 17:22:33 crc kubenswrapper[4792]: I1127 17:22:33.217779 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"2a318073-842f-45ff-b6df-bc0abc0d576b","Type":"ContainerStarted","Data":"54519c6ef5afdb281eb268acfe5d0bab9ecf95c1f8be08a65866fbbded86e97f"} Nov 27 17:22:33 crc kubenswrapper[4792]: I1127 17:22:33.233262 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 27 17:22:33 crc kubenswrapper[4792]: W1127 17:22:33.238833 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b4c2851_5058_4cfc_9efa_a5d94e7e8090.slice/crio-bb487be2a6f0ed4d1d24b3effa326705407ec6327eaed0725da5ba64acec416c WatchSource:0}: Error finding container bb487be2a6f0ed4d1d24b3effa326705407ec6327eaed0725da5ba64acec416c: Status 404 returned error can't find the container with id bb487be2a6f0ed4d1d24b3effa326705407ec6327eaed0725da5ba64acec416c Nov 27 17:22:34 crc kubenswrapper[4792]: I1127 17:22:34.229792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3b4c2851-5058-4cfc-9efa-a5d94e7e8090","Type":"ContainerStarted","Data":"bb487be2a6f0ed4d1d24b3effa326705407ec6327eaed0725da5ba64acec416c"} Nov 27 17:22:34 crc kubenswrapper[4792]: I1127 17:22:34.230876 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"6a9851c2-362b-425e-adf3-5056cbbfb169","Type":"ContainerStarted","Data":"8b1bc277957c0cce89caddcb330a112587635e283ab324a65eae7cff1c23aad3"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.243707 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" event={"ID":"5e5bb18c-7c60-4ec3-ac94-e33904750bb8","Type":"ContainerStarted","Data":"aba64dcff421eae2d951109a37cb9efd9c6201963c820b93e2f2a0cf697cf679"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.245001 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"6a9851c2-362b-425e-adf3-5056cbbfb169","Type":"ContainerStarted","Data":"5dc373697dfdc05285f348f18c70179b60e1a63c9f94620342dabade3cc21646"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.245175 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.247238 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" event={"ID":"8f70d890-772f-49eb-9c3b-0553bc2349ca","Type":"ContainerStarted","Data":"8c14771d3830baf4d3c356d2d1c2638b2985c37476ac1c71a300bae75e34494c"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.247875 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.249199 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" event={"ID":"b883f630-7c31-4a1a-9633-8770b40c5a69","Type":"ContainerStarted","Data":"df1536f6b819321679d09d46490f87719220b98088c1b9628b576819ecbca015"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.249331 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.250533 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"2a318073-842f-45ff-b6df-bc0abc0d576b","Type":"ContainerStarted","Data":"ce25533cd36876835e6b139581bbdf6f5a9faad201ba50cc3da551db8a0f161f"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.250786 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.251975 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3b4c2851-5058-4cfc-9efa-a5d94e7e8090","Type":"ContainerStarted","Data":"1357a315cecef1e16e0a8dcafaa0bf4864ed7d9b57f0049d5b827ec61c8b9c2f"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.252042 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.253363 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" event={"ID":"8435b802-65cf-46a0-89fa-fa55e43dfb68","Type":"ContainerStarted","Data":"31c215f3652cb8d11ae0dd7c7d740ed21b1c8c0f42848940d38849b8ebd67e11"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.255430 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" event={"ID":"1828379a-4323-4161-881c-cf67367db9d4","Type":"ContainerStarted","Data":"26c4afe1aad7f8202a25d273d80accab052efae1e30b44a515334e52ab4968d8"} Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.255641 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.294883 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.973831783 podStartE2EDuration="5.294861647s" podCreationTimestamp="2025-11-27 17:22:31 +0000 UTC" firstStartedPulling="2025-11-27 17:22:33.209957248 +0000 UTC m=+775.552783566" lastFinishedPulling="2025-11-27 17:22:35.530987112 +0000 UTC m=+777.873813430" observedRunningTime="2025-11-27 17:22:36.280054755 +0000 UTC m=+778.622881073" watchObservedRunningTime="2025-11-27 17:22:36.294861647 +0000 UTC m=+778.637687975" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.302312 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.700131656 podStartE2EDuration="5.302287884s" podCreationTimestamp="2025-11-27 17:22:31 +0000 UTC" firstStartedPulling="2025-11-27 17:22:32.929206123 +0000 UTC m=+775.272032441" lastFinishedPulling="2025-11-27 17:22:35.531362351 +0000 UTC m=+777.874188669" observedRunningTime="2025-11-27 17:22:36.293371469 +0000 UTC m=+778.636197787" watchObservedRunningTime="2025-11-27 17:22:36.302287884 +0000 UTC m=+778.645114212" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.316443 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7" podStartSLOduration=1.580356077 podStartE2EDuration="5.316427179s" podCreationTimestamp="2025-11-27 17:22:31 +0000 UTC" firstStartedPulling="2025-11-27 17:22:31.793868823 +0000 UTC m=+774.136695141" lastFinishedPulling="2025-11-27 17:22:35.529939915 +0000 UTC m=+777.872766243" observedRunningTime="2025-11-27 17:22:36.310181042 +0000 UTC m=+778.653007380" watchObservedRunningTime="2025-11-27 17:22:36.316427179 +0000 UTC m=+778.659253497" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.333595 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln" podStartSLOduration=2.171162363 podStartE2EDuration="5.33357352s" podCreationTimestamp="2025-11-27 17:22:31 +0000 UTC" firstStartedPulling="2025-11-27 17:22:32.304827133 +0000 UTC m=+774.647653451" lastFinishedPulling="2025-11-27 17:22:35.46723829 +0000 UTC m=+777.810064608" observedRunningTime="2025-11-27 17:22:36.326501582 +0000 UTC m=+778.669327900" watchObservedRunningTime="2025-11-27 17:22:36.33357352 +0000 UTC m=+778.676399838" Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.352040 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.062557312 podStartE2EDuration="5.352018563s" podCreationTimestamp="2025-11-27 17:22:31 +0000 UTC" firstStartedPulling="2025-11-27 17:22:33.24108344 +0000 UTC m=+775.583909758" lastFinishedPulling="2025-11-27 17:22:35.530544691 +0000 UTC m=+777.873371009" observedRunningTime="2025-11-27 17:22:36.344256638 +0000 UTC m=+778.687082956" watchObservedRunningTime="2025-11-27 
Nov 27 17:22:36 crc kubenswrapper[4792]: I1127 17:22:36.373808 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz" podStartSLOduration=1.78983099 podStartE2EDuration="5.37378877s" podCreationTimestamp="2025-11-27 17:22:31 +0000 UTC" firstStartedPulling="2025-11-27 17:22:31.945235857 +0000 UTC m=+774.288062185" lastFinishedPulling="2025-11-27 17:22:35.529193647 +0000 UTC m=+777.872019965" observedRunningTime="2025-11-27 17:22:36.371314418 +0000 UTC m=+778.714140726" watchObservedRunningTime="2025-11-27 17:22:36.37378877 +0000 UTC m=+778.716615088"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.273914 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" event={"ID":"8435b802-65cf-46a0-89fa-fa55e43dfb68","Type":"ContainerStarted","Data":"833c82c2854c36d32ab05b9038b6de4e76fe99c4ccd5ad0161893bd9915d9672"}
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.274466 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.274485 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.276985 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" event={"ID":"5e5bb18c-7c60-4ec3-ac94-e33904750bb8","Type":"ContainerStarted","Data":"9380fd6cc850971e57485845dacd326405564d6ac4961904a152123dad481bdd"}
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.278233 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.278258 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.283618 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.288011 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.288073 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.288540 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr"
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.289764 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.289806 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
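The two entries above show the kubelet's HTTP liveness probe against machine-config-daemon failing at the transport level: nothing is listening on 127.0.0.1:8798, so the GET dies with connection refused before any status code is seen. In the kubelet's HTTP prober only 2xx/3xx responses count as healthy; transport errors and codes of 400 and above are failures. A hypothetical Go re-creation of that check (probeHTTP is illustrative, not kubelet's actual code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeHTTP: GET the endpoint; a transport error (the "connection refused"
    // above) or a status outside the 2xx/3xx range is a probe failure.
    func probeHTTP(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
            fmt.Println("probe failed:", err)
        }
    }

What follows in the journal is the standard consequence: the probe result goes unhealthy, the container is killed with its grace period, and a replacement is started.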
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.289844 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.290475 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f01bf94bd55fb4aa5577fea4f28f3b654e0b34834b1e5c5ebc907510f5b8133"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.290542 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://9f01bf94bd55fb4aa5577fea4f28f3b654e0b34834b1e5c5ebc907510f5b8133" gracePeriod=600 Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.306605 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-767db5f6c6-tqz78" podStartSLOduration=1.66489048 podStartE2EDuration="7.306587792s" podCreationTimestamp="2025-11-27 17:22:31 +0000 UTC" firstStartedPulling="2025-11-27 17:22:32.431319492 +0000 UTC m=+774.774145810" lastFinishedPulling="2025-11-27 17:22:38.073016804 +0000 UTC m=+780.415843122" observedRunningTime="2025-11-27 17:22:38.306166281 +0000 UTC m=+780.648992609" watchObservedRunningTime="2025-11-27 17:22:38.306587792 +0000 UTC m=+780.649414110" Nov 27 17:22:38 crc kubenswrapper[4792]: I1127 17:22:38.343377 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-767db5f6c6-qqqzr" podStartSLOduration=1.623528827 podStartE2EDuration="7.343360762s" podCreationTimestamp="2025-11-27 17:22:31 +0000 UTC" firstStartedPulling="2025-11-27 17:22:32.358687797 +0000 UTC m=+774.701514115" lastFinishedPulling="2025-11-27 17:22:38.078519732 +0000 UTC m=+780.421346050" observedRunningTime="2025-11-27 17:22:38.340158532 +0000 UTC m=+780.682984840" watchObservedRunningTime="2025-11-27 17:22:38.343360762 +0000 UTC m=+780.686187080" Nov 27 17:22:39 crc kubenswrapper[4792]: I1127 17:22:39.285279 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="9f01bf94bd55fb4aa5577fea4f28f3b654e0b34834b1e5c5ebc907510f5b8133" exitCode=0 Nov 27 17:22:39 crc kubenswrapper[4792]: I1127 17:22:39.285330 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"9f01bf94bd55fb4aa5577fea4f28f3b654e0b34834b1e5c5ebc907510f5b8133"} Nov 27 17:22:39 crc kubenswrapper[4792]: I1127 17:22:39.285875 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"eb4dfc187c50b610e23a24ccd114a91f4e733187652bfdcb858d9943f47d0623"} Nov 27 17:22:39 crc kubenswrapper[4792]: I1127 17:22:39.285909 4792 scope.go:117] "RemoveContainer" containerID="e077930b952b5bb442db4d4d9a23e8530f27542b022a402bd9965e40a6267099" Nov 27 
Nov 27 17:22:51 crc kubenswrapper[4792]: I1127 17:22:51.398141 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-dgnv7"
Nov 27 17:22:51 crc kubenswrapper[4792]: I1127 17:22:51.541748 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-9rscz"
Nov 27 17:22:51 crc kubenswrapper[4792]: I1127 17:22:51.639537 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-l69ln"
Nov 27 17:22:52 crc kubenswrapper[4792]: I1127 17:22:52.518994 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Nov 27 17:22:52 crc kubenswrapper[4792]: I1127 17:22:52.519378 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2a318073-842f-45ff-b6df-bc0abc0d576b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 27 17:22:52 crc kubenswrapper[4792]: I1127 17:22:52.745907 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Nov 27 17:22:52 crc kubenswrapper[4792]: I1127 17:22:52.754373 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 27 17:23:02 crc kubenswrapper[4792]: I1127 17:23:02.517566 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Nov 27 17:23:02 crc kubenswrapper[4792]: I1127 17:23:02.518956 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2a318073-842f-45ff-b6df-bc0abc0d576b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 27 17:23:12 crc kubenswrapper[4792]: I1127 17:23:12.519469 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Nov 27 17:23:12 crc kubenswrapper[4792]: I1127 17:23:12.520289 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2a318073-842f-45ff-b6df-bc0abc0d576b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 27 17:23:22 crc kubenswrapper[4792]: I1127 17:23:22.515623 4792 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Nov 27 17:23:22 crc kubenswrapper[4792]: I1127 17:23:22.516106 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="2a318073-842f-45ff-b6df-bc0abc0d576b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 27 17:23:32 crc kubenswrapper[4792]: I1127 17:23:32.520719 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
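The minute of 503s above traces Loki's ingester ring lifecycle rather than a fault: the /ready handler answers 503 while the fresh instance owns no ring tokens, then during a deliberate post-join hold ("waiting for 15s after being ready", consistent with the ring's min_ready_duration default of 15s), and only then 200, at which point the kubelet flips the pod to ready at 17:23:32. The kubelet simply keeps probing on its period until the endpoint turns healthy; a hypothetical Go poller in the same spirit (waitReady and the endpoint are illustrative assumptions, not kubelet or Loki code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitReady re-probes until the endpoint answers 200 or the deadline
    // passes, roughly what the kubelet's periodic readiness probe amounts to.
    func waitReady(url string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // e.g. ingester joined the ring and sat out its hold-off
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("still not ready after %s", timeout)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Assumed endpoint: Loki conventionally serves readiness on /ready.
        fmt.Println(waitReady("http://localhost:3100/ready", 10*time.Second, 2*time.Minute))
    }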
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.388421 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-hh45h"]
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.389859 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.392214 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.392424 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.392709 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.394420 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-4g92r"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.400155 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.414903 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.431547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-hh45h"]
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.471450 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-hh45h"]
Nov 27 17:23:52 crc kubenswrapper[4792]: E1127 17:23:52.474278 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-ftpkw metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-ftpkw metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-hh45h" podUID="071c5424-6e10-4ce4-befc-c4ce504ac179"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519197 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-trusted-ca\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519240 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-entrypoint\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519259 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/071c5424-6e10-4ce4-befc-c4ce504ac179-datadir\") pod \"collector-hh45h\" (UID: 
\"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519294 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config-openshift-service-cacrt\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519347 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-metrics\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519380 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/071c5424-6e10-4ce4-befc-c4ce504ac179-tmp\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519396 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-token\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519432 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-syslog-receiver\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519447 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpkw\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-kube-api-access-ftpkw\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519471 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-sa-token\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.519488 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621284 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-trusted-ca\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " 
pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621342 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-entrypoint\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621370 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/071c5424-6e10-4ce4-befc-c4ce504ac179-datadir\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621412 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config-openshift-service-cacrt\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621489 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-metrics\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621534 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/071c5424-6e10-4ce4-befc-c4ce504ac179-tmp\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621553 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-token\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621577 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-syslog-receiver\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621600 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpkw\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-kube-api-access-ftpkw\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621635 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-sa-token\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.622628 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.623384 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-trusted-ca\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.624055 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-entrypoint\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.624104 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/071c5424-6e10-4ce4-befc-c4ce504ac179-datadir\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.621679 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.624684 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config-openshift-service-cacrt\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: E1127 17:23:52.625685 4792 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Nov 27 17:23:52 crc kubenswrapper[4792]: E1127 17:23:52.625748 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-syslog-receiver podName:071c5424-6e10-4ce4-befc-c4ce504ac179 nodeName:}" failed. No retries permitted until 2025-11-27 17:23:53.125729686 +0000 UTC m=+855.468556004 (durationBeforeRetry 500ms). 
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.630862 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-metrics\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.633095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-token\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.637304 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/071c5424-6e10-4ce4-befc-c4ce504ac179-tmp\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.644263 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpkw\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-kube-api-access-ftpkw\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.644357 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-sa-token\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.845825 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-hh45h"
Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.854802 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-hh45h" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.927929 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-token\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.928489 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/071c5424-6e10-4ce4-befc-c4ce504ac179-tmp\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.928605 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.928745 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-entrypoint\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.928824 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config-openshift-service-cacrt\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.928972 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/071c5424-6e10-4ce4-befc-c4ce504ac179-datadir\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.929019 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/071c5424-6e10-4ce4-befc-c4ce504ac179-datadir" (OuterVolumeSpecName: "datadir") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.929069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpkw\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-kube-api-access-ftpkw\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.929161 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-metrics\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.929215 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-sa-token\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.929359 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-trusted-ca\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.929977 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config" (OuterVolumeSpecName: "config") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.930121 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.930150 4792 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/071c5424-6e10-4ce4-befc-c4ce504ac179-datadir\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.930424 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.930504 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.933550 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-sa-token" (OuterVolumeSpecName: "sa-token") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.933629 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-metrics" (OuterVolumeSpecName: "metrics") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.934171 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-token" (OuterVolumeSpecName: "collector-token") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.934872 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071c5424-6e10-4ce4-befc-c4ce504ac179-tmp" (OuterVolumeSpecName: "tmp") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:23:52 crc kubenswrapper[4792]: I1127 17:23:52.940736 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-kube-api-access-ftpkw" (OuterVolumeSpecName: "kube-api-access-ftpkw") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "kube-api-access-ftpkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031346 4792 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-token\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031388 4792 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/071c5424-6e10-4ce4-befc-c4ce504ac179-tmp\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031402 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031415 4792 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-entrypoint\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031429 4792 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031444 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpkw\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-kube-api-access-ftpkw\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031459 4792 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-metrics\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031471 4792 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/071c5424-6e10-4ce4-befc-c4ce504ac179-sa-token\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.031483 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/071c5424-6e10-4ce4-befc-c4ce504ac179-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.132965 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-syslog-receiver\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.138138 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-syslog-receiver\") pod \"collector-hh45h\" (UID: \"071c5424-6e10-4ce4-befc-c4ce504ac179\") " pod="openshift-logging/collector-hh45h" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.234903 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-syslog-receiver\") pod \"071c5424-6e10-4ce4-befc-c4ce504ac179\" (UID: 
\"071c5424-6e10-4ce4-befc-c4ce504ac179\") " Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.238225 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "071c5424-6e10-4ce4-befc-c4ce504ac179" (UID: "071c5424-6e10-4ce4-befc-c4ce504ac179"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.336402 4792 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/071c5424-6e10-4ce4-befc-c4ce504ac179-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.854461 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-hh45h" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.931740 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-hh45h"] Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.941324 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-hh45h"] Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.951088 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-kv4ll"] Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.958121 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-kv4ll" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.965746 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-kv4ll"] Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.965983 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-4g92r" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.967402 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.967557 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.967762 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.967882 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 27 17:23:53 crc kubenswrapper[4792]: I1127 17:23:53.976872 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.049305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b495d78f-2e10-4171-88ba-2ddb90195710-datadir\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.049463 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-config-openshift-service-cacrt\") pod \"collector-kv4ll\" (UID: 
\"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.049559 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b495d78f-2e10-4171-88ba-2ddb90195710-tmp\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.049636 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-entrypoint\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.049867 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b495d78f-2e10-4171-88ba-2ddb90195710-sa-token\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.049940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rft\" (UniqueName: \"kubernetes.io/projected/b495d78f-2e10-4171-88ba-2ddb90195710-kube-api-access-n7rft\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.050024 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-config\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.050107 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-metrics\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.050247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-collector-syslog-receiver\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.050330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-trusted-ca\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.050385 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-collector-token\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll" Nov 27 
17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152255 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b495d78f-2e10-4171-88ba-2ddb90195710-sa-token\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152308 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rft\" (UniqueName: \"kubernetes.io/projected/b495d78f-2e10-4171-88ba-2ddb90195710-kube-api-access-n7rft\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152335 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-config\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152358 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-metrics\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152397 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-collector-syslog-receiver\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152417 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-trusted-ca\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-collector-token\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152461 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b495d78f-2e10-4171-88ba-2ddb90195710-datadir\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-config-openshift-service-cacrt\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152514 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b495d78f-2e10-4171-88ba-2ddb90195710-tmp\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.152536 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-entrypoint\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.153026 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/b495d78f-2e10-4171-88ba-2ddb90195710-datadir\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.153452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-entrypoint\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.153921 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-config-openshift-service-cacrt\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.154135 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-config\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.155086 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b495d78f-2e10-4171-88ba-2ddb90195710-trusted-ca\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.157847 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-collector-syslog-receiver\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.160071 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-metrics\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.160772 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/b495d78f-2e10-4171-88ba-2ddb90195710-collector-token\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.161156 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b495d78f-2e10-4171-88ba-2ddb90195710-tmp\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.168643 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rft\" (UniqueName: \"kubernetes.io/projected/b495d78f-2e10-4171-88ba-2ddb90195710-kube-api-access-n7rft\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.179427 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/b495d78f-2e10-4171-88ba-2ddb90195710-sa-token\") pod \"collector-kv4ll\" (UID: \"b495d78f-2e10-4171-88ba-2ddb90195710\") " pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.293249 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-kv4ll"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.703026 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071c5424-6e10-4ce4-befc-c4ce504ac179" path="/var/lib/kubelet/pods/071c5424-6e10-4ce4-befc-c4ce504ac179/volumes"
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.814584 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-kv4ll"]
Nov 27 17:23:54 crc kubenswrapper[4792]: I1127 17:23:54.862897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-kv4ll" event={"ID":"b495d78f-2e10-4171-88ba-2ddb90195710","Type":"ContainerStarted","Data":"dbeed7ad3fc61de1c360e710c3bc5c842edd00bdad62773466b55b779ef9a305"}
Nov 27 17:24:02 crc kubenswrapper[4792]: I1127 17:24:02.918355 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-kv4ll" event={"ID":"b495d78f-2e10-4171-88ba-2ddb90195710","Type":"ContainerStarted","Data":"cce10f15199bda4bc515e10f02cca024af328b5b3c038ca8ca8b84c4fdb4a50b"}
Nov 27 17:24:02 crc kubenswrapper[4792]: I1127 17:24:02.944419 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-kv4ll" podStartSLOduration=2.59309972 podStartE2EDuration="9.94439605s" podCreationTimestamp="2025-11-27 17:23:53 +0000 UTC" firstStartedPulling="2025-11-27 17:23:54.825722718 +0000 UTC m=+857.168549036" lastFinishedPulling="2025-11-27 17:24:02.177019038 +0000 UTC m=+864.519845366" observedRunningTime="2025-11-27 17:24:02.941553559 +0000 UTC m=+865.284379897" watchObservedRunningTime="2025-11-27 17:24:02.94439605 +0000 UTC m=+865.287222378"
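The pod_startup_latency_tracker record above carries enough timestamps to check its own numbers. As a rough decomposition (assuming, as the kubelet's tracker is generally described, that podStartSLOduration is the end-to-end startup time minus the image-pull window):

    pull window         = lastFinishedPulling - firstStartedPulling
                        = 17:24:02.177019038 - 17:23:54.825722718 ≈ 7.351s
    podStartE2EDuration = observedRunningTime - podCreationTimestamp
                        ≈ 17:24:02.944 - 17:23:53 ≈ 9.944s
    podStartSLOduration ≈ 9.944s - 7.351s ≈ 2.593s   (reported: 2.59309972)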
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.647968 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"]
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.650383 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.653693 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.659059 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"]
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.716389 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.716449 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srrfc\" (UniqueName: \"kubernetes.io/projected/702456d4-256d-4792-bfb7-0389c6aa9726-kube-api-access-srrfc\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.716563 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.818131 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.818258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.818292 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srrfc\" (UniqueName: \"kubernetes.io/projected/702456d4-256d-4792-bfb7-0389c6aa9726-kube-api-access-srrfc\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.819291 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.819816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.848982 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srrfc\" (UniqueName: \"kubernetes.io/projected/702456d4-256d-4792-bfb7-0389c6aa9726-kube-api-access-srrfc\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:32 crc kubenswrapper[4792]: I1127 17:24:32.975915 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"
Nov 27 17:24:33 crc kubenswrapper[4792]: I1127 17:24:33.425843 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8"]
Nov 27 17:24:33 crc kubenswrapper[4792]: W1127 17:24:33.432609 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod702456d4_256d_4792_bfb7_0389c6aa9726.slice/crio-b48f4b0e6957a441c07d8003581459994e8aa359a65b1f500bcba972e6f1518b WatchSource:0}: Error finding container b48f4b0e6957a441c07d8003581459994e8aa359a65b1f500bcba972e6f1518b: Status 404 returned error can't find the container with id b48f4b0e6957a441c07d8003581459994e8aa359a65b1f500bcba972e6f1518b
Nov 27 17:24:34 crc kubenswrapper[4792]: I1127 17:24:34.144403 4792 generic.go:334] "Generic (PLEG): container finished" podID="702456d4-256d-4792-bfb7-0389c6aa9726" containerID="ea0ca9a4621b85c174c8e86bc7d29ce8826613a6ced56d781c51b497f87dc5ec" exitCode=0
Nov 27 17:24:34 crc kubenswrapper[4792]: I1127 17:24:34.144456 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8" event={"ID":"702456d4-256d-4792-bfb7-0389c6aa9726","Type":"ContainerDied","Data":"ea0ca9a4621b85c174c8e86bc7d29ce8826613a6ced56d781c51b497f87dc5ec"}
Nov 27 17:24:34 crc kubenswrapper[4792]: I1127 17:24:34.144810 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8" event={"ID":"702456d4-256d-4792-bfb7-0389c6aa9726","Type":"ContainerStarted","Data":"b48f4b0e6957a441c07d8003581459994e8aa359a65b1f500bcba972e6f1518b"}
Nov 27 17:24:34 crc kubenswrapper[4792]: I1127 17:24:34.994658 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9c29"]
Nov 27 17:24:34 crc kubenswrapper[4792]: I1127 17:24:34.996588 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.024731 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9c29"]
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.059343 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-utilities\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.059495 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z922z\" (UniqueName: \"kubernetes.io/projected/6cd1e579-17e5-4eba-afc9-292c34ea92df-kube-api-access-z922z\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.059540 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-catalog-content\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.160915 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z922z\" (UniqueName: \"kubernetes.io/projected/6cd1e579-17e5-4eba-afc9-292c34ea92df-kube-api-access-z922z\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.160980 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-catalog-content\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.161038 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-utilities\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.161792 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-utilities\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.162370 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-catalog-content\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.180343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z922z\" (UniqueName: \"kubernetes.io/projected/6cd1e579-17e5-4eba-afc9-292c34ea92df-kube-api-access-z922z\") pod \"redhat-operators-r9c29\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") " pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.318756 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:35 crc kubenswrapper[4792]: I1127 17:24:35.679124 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9c29"]
Nov 27 17:24:35 crc kubenswrapper[4792]: W1127 17:24:35.684570 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cd1e579_17e5_4eba_afc9_292c34ea92df.slice/crio-e0725d684758ad6803cba48affc56476f8c56b7698bd8f29fcaf9a362b7b4a7a WatchSource:0}: Error finding container e0725d684758ad6803cba48affc56476f8c56b7698bd8f29fcaf9a362b7b4a7a: Status 404 returned error can't find the container with id e0725d684758ad6803cba48affc56476f8c56b7698bd8f29fcaf9a362b7b4a7a
Nov 27 17:24:36 crc kubenswrapper[4792]: I1127 17:24:36.158204 4792 generic.go:334] "Generic (PLEG): container finished" podID="702456d4-256d-4792-bfb7-0389c6aa9726" containerID="420e62a8b1159b69ce086f19b82622d4b454510b01c963602598c83ee42f3bea" exitCode=0
Nov 27 17:24:36 crc kubenswrapper[4792]: I1127 17:24:36.158272 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8" event={"ID":"702456d4-256d-4792-bfb7-0389c6aa9726","Type":"ContainerDied","Data":"420e62a8b1159b69ce086f19b82622d4b454510b01c963602598c83ee42f3bea"}
Nov 27 17:24:36 crc kubenswrapper[4792]: I1127 17:24:36.161459 4792 generic.go:334] "Generic (PLEG): container finished" podID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerID="1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b" exitCode=0
Nov 27 17:24:36 crc kubenswrapper[4792]: I1127 17:24:36.161498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9c29" event={"ID":"6cd1e579-17e5-4eba-afc9-292c34ea92df","Type":"ContainerDied","Data":"1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b"}
Nov 27 17:24:36 crc kubenswrapper[4792]: I1127 17:24:36.161523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9c29" event={"ID":"6cd1e579-17e5-4eba-afc9-292c34ea92df","Type":"ContainerStarted","Data":"e0725d684758ad6803cba48affc56476f8c56b7698bd8f29fcaf9a362b7b4a7a"}
Nov 27 17:24:37 crc kubenswrapper[4792]: E1127 17:24:37.428990 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod702456d4_256d_4792_bfb7_0389c6aa9726.slice/crio-ed6f5028d1adfaca91604b37fb24c2dcdbd52973af8221667075f0abd4eda655.scope\": RecentStats: unable to find data in memory cache]"
Nov 27 17:24:38 crc kubenswrapper[4792]: I1127 17:24:38.468791 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:24:38 crc kubenswrapper[4792]: I1127 17:24:38.469194 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:24:38 crc kubenswrapper[4792]: I1127 17:24:38.485957 4792 generic.go:334] "Generic (PLEG): container finished" podID="702456d4-256d-4792-bfb7-0389c6aa9726" containerID="ed6f5028d1adfaca91604b37fb24c2dcdbd52973af8221667075f0abd4eda655" exitCode=0 Nov 27 17:24:38 crc kubenswrapper[4792]: I1127 17:24:38.486038 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8" event={"ID":"702456d4-256d-4792-bfb7-0389c6aa9726","Type":"ContainerDied","Data":"ed6f5028d1adfaca91604b37fb24c2dcdbd52973af8221667075f0abd4eda655"} Nov 27 17:24:38 crc kubenswrapper[4792]: I1127 17:24:38.488500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9c29" event={"ID":"6cd1e579-17e5-4eba-afc9-292c34ea92df","Type":"ContainerStarted","Data":"281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511"} Nov 27 17:24:39 crc kubenswrapper[4792]: I1127 17:24:39.505164 4792 generic.go:334] "Generic (PLEG): container finished" podID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerID="281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511" exitCode=0 Nov 27 17:24:39 crc kubenswrapper[4792]: I1127 17:24:39.505272 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9c29" event={"ID":"6cd1e579-17e5-4eba-afc9-292c34ea92df","Type":"ContainerDied","Data":"281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511"} Nov 27 17:24:39 crc kubenswrapper[4792]: I1127 17:24:39.505623 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9c29" event={"ID":"6cd1e579-17e5-4eba-afc9-292c34ea92df","Type":"ContainerStarted","Data":"46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2"} Nov 27 17:24:39 crc kubenswrapper[4792]: I1127 17:24:39.534090 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r9c29" podStartSLOduration=2.428766294 podStartE2EDuration="5.534055722s" podCreationTimestamp="2025-11-27 17:24:34 +0000 UTC" firstStartedPulling="2025-11-27 17:24:36.162500326 +0000 UTC m=+898.505326654" lastFinishedPulling="2025-11-27 17:24:39.267789734 +0000 UTC m=+901.610616082" observedRunningTime="2025-11-27 17:24:39.529755875 +0000 UTC m=+901.872582213" watchObservedRunningTime="2025-11-27 17:24:39.534055722 +0000 UTC m=+901.876882040" Nov 27 17:24:39 crc kubenswrapper[4792]: I1127 17:24:39.942276 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8" Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.095154 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srrfc\" (UniqueName: \"kubernetes.io/projected/702456d4-256d-4792-bfb7-0389c6aa9726-kube-api-access-srrfc\") pod \"702456d4-256d-4792-bfb7-0389c6aa9726\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.095244 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-util\") pod \"702456d4-256d-4792-bfb7-0389c6aa9726\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.095275 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-bundle\") pod \"702456d4-256d-4792-bfb7-0389c6aa9726\" (UID: \"702456d4-256d-4792-bfb7-0389c6aa9726\") " Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.096139 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-bundle" (OuterVolumeSpecName: "bundle") pod "702456d4-256d-4792-bfb7-0389c6aa9726" (UID: "702456d4-256d-4792-bfb7-0389c6aa9726"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.107940 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/702456d4-256d-4792-bfb7-0389c6aa9726-kube-api-access-srrfc" (OuterVolumeSpecName: "kube-api-access-srrfc") pod "702456d4-256d-4792-bfb7-0389c6aa9726" (UID: "702456d4-256d-4792-bfb7-0389c6aa9726"). InnerVolumeSpecName "kube-api-access-srrfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.112265 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-util" (OuterVolumeSpecName: "util") pod "702456d4-256d-4792-bfb7-0389c6aa9726" (UID: "702456d4-256d-4792-bfb7-0389c6aa9726"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.196866 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srrfc\" (UniqueName: \"kubernetes.io/projected/702456d4-256d-4792-bfb7-0389c6aa9726-kube-api-access-srrfc\") on node \"crc\" DevicePath \"\"" Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.196905 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-util\") on node \"crc\" DevicePath \"\"" Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.196914 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/702456d4-256d-4792-bfb7-0389c6aa9726-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.517317 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8" event={"ID":"702456d4-256d-4792-bfb7-0389c6aa9726","Type":"ContainerDied","Data":"b48f4b0e6957a441c07d8003581459994e8aa359a65b1f500bcba972e6f1518b"} Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.517354 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48f4b0e6957a441c07d8003581459994e8aa359a65b1f500bcba972e6f1518b" Nov 27 17:24:40 crc kubenswrapper[4792]: I1127 17:24:40.517322 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8" Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.044514 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc"] Nov 27 17:24:45 crc kubenswrapper[4792]: E1127 17:24:45.045235 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702456d4-256d-4792-bfb7-0389c6aa9726" containerName="extract" Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.045254 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="702456d4-256d-4792-bfb7-0389c6aa9726" containerName="extract" Nov 27 17:24:45 crc kubenswrapper[4792]: E1127 17:24:45.045294 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702456d4-256d-4792-bfb7-0389c6aa9726" containerName="util" Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.045305 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="702456d4-256d-4792-bfb7-0389c6aa9726" containerName="util" Nov 27 17:24:45 crc kubenswrapper[4792]: E1127 17:24:45.045328 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="702456d4-256d-4792-bfb7-0389c6aa9726" containerName="pull" Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.045339 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="702456d4-256d-4792-bfb7-0389c6aa9726" containerName="pull" Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.045529 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="702456d4-256d-4792-bfb7-0389c6aa9726" containerName="extract" Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.046359 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.049103 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.049281 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.054144 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc"]
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.090813 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ptxrs"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.193768 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5jd4\" (UniqueName: \"kubernetes.io/projected/ded38fca-6b87-471c-ac68-423a6963dca6-kube-api-access-q5jd4\") pod \"nmstate-operator-5b5b58f5c8-j62cc\" (UID: \"ded38fca-6b87-471c-ac68-423a6963dca6\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.295035 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5jd4\" (UniqueName: \"kubernetes.io/projected/ded38fca-6b87-471c-ac68-423a6963dca6-kube-api-access-q5jd4\") pod \"nmstate-operator-5b5b58f5c8-j62cc\" (UID: \"ded38fca-6b87-471c-ac68-423a6963dca6\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.319297 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5jd4\" (UniqueName: \"kubernetes.io/projected/ded38fca-6b87-471c-ac68-423a6963dca6-kube-api-access-q5jd4\") pod \"nmstate-operator-5b5b58f5c8-j62cc\" (UID: \"ded38fca-6b87-471c-ac68-423a6963dca6\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.319547 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.320619 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.374202 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.411033 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.601787 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:45 crc kubenswrapper[4792]: I1127 17:24:45.813073 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc"]
Nov 27 17:24:45 crc kubenswrapper[4792]: W1127 17:24:45.813581 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podded38fca_6b87_471c_ac68_423a6963dca6.slice/crio-915d60a23864c689f9e02628acdbbf85f0ca9b4731575e23d95c7382aa501a08 WatchSource:0}: Error finding container 915d60a23864c689f9e02628acdbbf85f0ca9b4731575e23d95c7382aa501a08: Status 404 returned error can't find the container with id 915d60a23864c689f9e02628acdbbf85f0ca9b4731575e23d95c7382aa501a08
Nov 27 17:24:46 crc kubenswrapper[4792]: I1127 17:24:46.562583 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc" event={"ID":"ded38fca-6b87-471c-ac68-423a6963dca6","Type":"ContainerStarted","Data":"915d60a23864c689f9e02628acdbbf85f0ca9b4731575e23d95c7382aa501a08"}
Nov 27 17:24:47 crc kubenswrapper[4792]: I1127 17:24:47.383868 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9c29"]
Nov 27 17:24:48 crc kubenswrapper[4792]: I1127 17:24:48.578977 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc" event={"ID":"ded38fca-6b87-471c-ac68-423a6963dca6","Type":"ContainerStarted","Data":"0e75da4df70bb0585356476ada603b400f5bbdea6ef389e079771687387ef17f"}
Nov 27 17:24:48 crc kubenswrapper[4792]: I1127 17:24:48.579164 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9c29" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerName="registry-server" containerID="cri-o://46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2" gracePeriod=2
Nov 27 17:24:48 crc kubenswrapper[4792]: I1127 17:24:48.600733 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-j62cc" podStartSLOduration=1.2905934430000001 podStartE2EDuration="3.600716625s" podCreationTimestamp="2025-11-27 17:24:45 +0000 UTC" firstStartedPulling="2025-11-27 17:24:45.816720441 +0000 UTC m=+908.159546759" lastFinishedPulling="2025-11-27 17:24:48.126843613 +0000 UTC m=+910.469669941" observedRunningTime="2025-11-27 17:24:48.597189068 +0000 UTC m=+910.940015426" watchObservedRunningTime="2025-11-27 17:24:48.600716625 +0000 UTC m=+910.943542943"
Nov 27 17:24:48 crc kubenswrapper[4792]: I1127 17:24:48.942293 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9c29"
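"Killing container with a grace period ... gracePeriod=2" above means the runtime delivers SIGTERM to the registry-server container and, if it is still running two seconds later, follows up with SIGKILL. A minimal Go sketch of the process-side half of that contract (generic, not the registry-server's actual shutdown code):

    package main

    import (
    	"os"
    	"os/signal"
    	"syscall"
    )

    func main() {
    	// A container that wants a clean shutdown listens for SIGTERM
    	// and exits well within the pod's grace period (2s here).
    	term := make(chan os.Signal, 1)
    	signal.Notify(term, syscall.SIGTERM)
    	<-term
    	// ... flush/close work goes here, bounded under the grace period ...
    	os.Exit(0)
    }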
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.066950 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z922z\" (UniqueName: \"kubernetes.io/projected/6cd1e579-17e5-4eba-afc9-292c34ea92df-kube-api-access-z922z\") pod \"6cd1e579-17e5-4eba-afc9-292c34ea92df\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") "
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.067104 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-utilities\") pod \"6cd1e579-17e5-4eba-afc9-292c34ea92df\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") "
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.067150 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-catalog-content\") pod \"6cd1e579-17e5-4eba-afc9-292c34ea92df\" (UID: \"6cd1e579-17e5-4eba-afc9-292c34ea92df\") "
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.067957 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-utilities" (OuterVolumeSpecName: "utilities") pod "6cd1e579-17e5-4eba-afc9-292c34ea92df" (UID: "6cd1e579-17e5-4eba-afc9-292c34ea92df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.073133 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd1e579-17e5-4eba-afc9-292c34ea92df-kube-api-access-z922z" (OuterVolumeSpecName: "kube-api-access-z922z") pod "6cd1e579-17e5-4eba-afc9-292c34ea92df" (UID: "6cd1e579-17e5-4eba-afc9-292c34ea92df"). InnerVolumeSpecName "kube-api-access-z922z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.168549 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z922z\" (UniqueName: \"kubernetes.io/projected/6cd1e579-17e5-4eba-afc9-292c34ea92df-kube-api-access-z922z\") on node \"crc\" DevicePath \"\""
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.168582 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-utilities\") on node \"crc\" DevicePath \"\""
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.178854 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cd1e579-17e5-4eba-afc9-292c34ea92df" (UID: "6cd1e579-17e5-4eba-afc9-292c34ea92df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.269493 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd1e579-17e5-4eba-afc9-292c34ea92df-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.587668 4792 generic.go:334] "Generic (PLEG): container finished" podID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerID="46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2" exitCode=0
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.587862 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9c29" event={"ID":"6cd1e579-17e5-4eba-afc9-292c34ea92df","Type":"ContainerDied","Data":"46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2"}
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.588773 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9c29" event={"ID":"6cd1e579-17e5-4eba-afc9-292c34ea92df","Type":"ContainerDied","Data":"e0725d684758ad6803cba48affc56476f8c56b7698bd8f29fcaf9a362b7b4a7a"}
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.587968 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9c29"
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.588895 4792 scope.go:117] "RemoveContainer" containerID="46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2"
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.623855 4792 scope.go:117] "RemoveContainer" containerID="281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511"
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.658785 4792 scope.go:117] "RemoveContainer" containerID="1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b"
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.698799 4792 scope.go:117] "RemoveContainer" containerID="46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2"
Nov 27 17:24:49 crc kubenswrapper[4792]: E1127 17:24:49.703743 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2\": container with ID starting with 46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2 not found: ID does not exist" containerID="46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2"
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.703783 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2"} err="failed to get container status \"46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2\": rpc error: code = NotFound desc = could not find container \"46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2\": container with ID starting with 46cfb3e82f3aa40d9549740eb0c7f64d4f541a3521d6da56f05c12bfb5aff6d2 not found: ID does not exist"
Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.703853 4792 scope.go:117] "RemoveContainer" containerID="281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511"
Nov 27 17:24:49 crc kubenswrapper[4792]: E1127 17:24:49.704095 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511\": container with ID starting with 281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511 not found: ID does not exist" containerID="281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511"
container \"281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511\": container with ID starting with 281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511 not found: ID does not exist" containerID="281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511" Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.704114 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511"} err="failed to get container status \"281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511\": rpc error: code = NotFound desc = could not find container \"281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511\": container with ID starting with 281d28fab219385dcfe01e0131f8ab3ebb9595332446958233e3eed591868511 not found: ID does not exist" Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.704127 4792 scope.go:117] "RemoveContainer" containerID="1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b" Nov 27 17:24:49 crc kubenswrapper[4792]: E1127 17:24:49.705734 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b\": container with ID starting with 1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b not found: ID does not exist" containerID="1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b" Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.705763 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b"} err="failed to get container status \"1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b\": rpc error: code = NotFound desc = could not find container \"1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b\": container with ID starting with 1cc8a30a59a0f8068b96a4a941f1cd2189791894e51f4794824332d4255eb64b not found: ID does not exist" Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.733692 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9c29"] Nov 27 17:24:49 crc kubenswrapper[4792]: I1127 17:24:49.751456 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r9c29"] Nov 27 17:24:50 crc kubenswrapper[4792]: I1127 17:24:50.704401 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" path="/var/lib/kubelet/pods/6cd1e579-17e5-4eba-afc9-292c34ea92df/volumes" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.189240 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mn896"] Nov 27 17:24:51 crc kubenswrapper[4792]: E1127 17:24:51.189903 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerName="extract-utilities" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.189919 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerName="extract-utilities" Nov 27 17:24:51 crc kubenswrapper[4792]: E1127 17:24:51.189937 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerName="registry-server" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.189946 4792 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerName="registry-server" Nov 27 17:24:51 crc kubenswrapper[4792]: E1127 17:24:51.189963 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerName="extract-content" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.189971 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerName="extract-content" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.190136 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd1e579-17e5-4eba-afc9-292c34ea92df" containerName="registry-server" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.191582 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.214196 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mn896"] Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.299551 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-utilities\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.299616 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-catalog-content\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.299718 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jnr9\" (UniqueName: \"kubernetes.io/projected/96336823-3199-417f-a9c1-d1e7571c2e1d-kube-api-access-6jnr9\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.401435 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-utilities\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.401493 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-catalog-content\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.401552 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jnr9\" (UniqueName: \"kubernetes.io/projected/96336823-3199-417f-a9c1-d1e7571c2e1d-kube-api-access-6jnr9\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc 
kubenswrapper[4792]: I1127 17:24:51.402080 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-utilities\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.402137 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-catalog-content\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.420868 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jnr9\" (UniqueName: \"kubernetes.io/projected/96336823-3199-417f-a9c1-d1e7571c2e1d-kube-api-access-6jnr9\") pod \"community-operators-mn896\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.509347 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mn896" Nov 27 17:24:51 crc kubenswrapper[4792]: I1127 17:24:51.997013 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mn896"] Nov 27 17:24:52 crc kubenswrapper[4792]: W1127 17:24:52.009228 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96336823_3199_417f_a9c1_d1e7571c2e1d.slice/crio-e3ab3a42ec125fbf67df4a4a9cafe70a537a4492358fb0db88e533ac60174d07 WatchSource:0}: Error finding container e3ab3a42ec125fbf67df4a4a9cafe70a537a4492358fb0db88e533ac60174d07: Status 404 returned error can't find the container with id e3ab3a42ec125fbf67df4a4a9cafe70a537a4492358fb0db88e533ac60174d07 Nov 27 17:24:52 crc kubenswrapper[4792]: I1127 17:24:52.619848 4792 generic.go:334] "Generic (PLEG): container finished" podID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerID="c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86" exitCode=0 Nov 27 17:24:52 crc kubenswrapper[4792]: I1127 17:24:52.619906 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn896" event={"ID":"96336823-3199-417f-a9c1-d1e7571c2e1d","Type":"ContainerDied","Data":"c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86"} Nov 27 17:24:52 crc kubenswrapper[4792]: I1127 17:24:52.619936 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn896" event={"ID":"96336823-3199-417f-a9c1-d1e7571c2e1d","Type":"ContainerStarted","Data":"e3ab3a42ec125fbf67df4a4a9cafe70a537a4492358fb0db88e533ac60174d07"} Nov 27 17:24:52 crc kubenswrapper[4792]: I1127 17:24:52.623758 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.628834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn896" event={"ID":"96336823-3199-417f-a9c1-d1e7571c2e1d","Type":"ContainerStarted","Data":"c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6"} Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.895285 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp"] Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.896675 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp" Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.898378 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-t56cs" Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.903802 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs"] Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.905083 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.907007 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.908786 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp"] Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.934671 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs"] Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.944787 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/da86e440-8b68-4f21-bc7b-5cc71334ce5a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-lmhfs\" (UID: \"da86e440-8b68-4f21-bc7b-5cc71334ce5a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.944926 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6wq\" (UniqueName: \"kubernetes.io/projected/a21d5243-150d-488b-9cf2-ab95ee2732e6-kube-api-access-lg6wq\") pod \"nmstate-metrics-7f946cbc9-r6wgp\" (UID: \"a21d5243-150d-488b-9cf2-ab95ee2732e6\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp" Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.945027 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfnph\" (UniqueName: \"kubernetes.io/projected/da86e440-8b68-4f21-bc7b-5cc71334ce5a-kube-api-access-wfnph\") pod \"nmstate-webhook-5f6d4c5ccb-lmhfs\" (UID: \"da86e440-8b68-4f21-bc7b-5cc71334ce5a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.947929 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7z5qw"] Nov 27 17:24:53 crc kubenswrapper[4792]: I1127 17:24:53.948831 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7z5qw" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.026444 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"] Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.027272 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.029271 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.029711 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jwpj8"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.029751 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.038822 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"]
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046317 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k6fm\" (UniqueName: \"kubernetes.io/projected/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-kube-api-access-6k6fm\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046436 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-dbus-socket\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046513 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-nmstate-lock\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046617 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfnph\" (UniqueName: \"kubernetes.io/projected/da86e440-8b68-4f21-bc7b-5cc71334ce5a-kube-api-access-wfnph\") pod \"nmstate-webhook-5f6d4c5ccb-lmhfs\" (UID: \"da86e440-8b68-4f21-bc7b-5cc71334ce5a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046658 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5131ddcc-b3d4-4df4-9474-19896fb63573-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046725 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/da86e440-8b68-4f21-bc7b-5cc71334ce5a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-lmhfs\" (UID: \"da86e440-8b68-4f21-bc7b-5cc71334ce5a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046769 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-ovs-socket\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046874 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5131ddcc-b3d4-4df4-9474-19896fb63573-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046944 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg6wq\" (UniqueName: \"kubernetes.io/projected/a21d5243-150d-488b-9cf2-ab95ee2732e6-kube-api-access-lg6wq\") pod \"nmstate-metrics-7f946cbc9-r6wgp\" (UID: \"a21d5243-150d-488b-9cf2-ab95ee2732e6\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.046964 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffwxw\" (UniqueName: \"kubernetes.io/projected/5131ddcc-b3d4-4df4-9474-19896fb63573-kube-api-access-ffwxw\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.053688 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/da86e440-8b68-4f21-bc7b-5cc71334ce5a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-lmhfs\" (UID: \"da86e440-8b68-4f21-bc7b-5cc71334ce5a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.068302 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg6wq\" (UniqueName: \"kubernetes.io/projected/a21d5243-150d-488b-9cf2-ab95ee2732e6-kube-api-access-lg6wq\") pod \"nmstate-metrics-7f946cbc9-r6wgp\" (UID: \"a21d5243-150d-488b-9cf2-ab95ee2732e6\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.070203 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfnph\" (UniqueName: \"kubernetes.io/projected/da86e440-8b68-4f21-bc7b-5cc71334ce5a-kube-api-access-wfnph\") pod \"nmstate-webhook-5f6d4c5ccb-lmhfs\" (UID: \"da86e440-8b68-4f21-bc7b-5cc71334ce5a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.148623 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5131ddcc-b3d4-4df4-9474-19896fb63573-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.148736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-ovs-socket\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.148786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5131ddcc-b3d4-4df4-9474-19896fb63573-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"
Nov 27 17:24:54 crc kubenswrapper[4792]: E1127 17:24:54.148811 4792 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.148836 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-ovs-socket\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.148833 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffwxw\" (UniqueName: \"kubernetes.io/projected/5131ddcc-b3d4-4df4-9474-19896fb63573-kube-api-access-ffwxw\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"
Nov 27 17:24:54 crc kubenswrapper[4792]: E1127 17:24:54.148900 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5131ddcc-b3d4-4df4-9474-19896fb63573-plugin-serving-cert podName:5131ddcc-b3d4-4df4-9474-19896fb63573 nodeName:}" failed. No retries permitted until 2025-11-27 17:24:54.648879394 +0000 UTC m=+916.991705782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5131ddcc-b3d4-4df4-9474-19896fb63573-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-th928" (UID: "5131ddcc-b3d4-4df4-9474-19896fb63573") : secret "plugin-serving-cert" not found
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.148989 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k6fm\" (UniqueName: \"kubernetes.io/projected/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-kube-api-access-6k6fm\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.149054 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-dbus-socket\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.149100 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-nmstate-lock\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.149281 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-nmstate-lock\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw"
17:24:54.149415 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-dbus-socket\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.149668 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5131ddcc-b3d4-4df4-9474-19896fb63573-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.180826 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k6fm\" (UniqueName: \"kubernetes.io/projected/074e28a6-e1f1-43d3-b34a-b2d8c143f8af-kube-api-access-6k6fm\") pod \"nmstate-handler-7z5qw\" (UID: \"074e28a6-e1f1-43d3-b34a-b2d8c143f8af\") " pod="openshift-nmstate/nmstate-handler-7z5qw" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.199167 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffwxw\" (UniqueName: \"kubernetes.io/projected/5131ddcc-b3d4-4df4-9474-19896fb63573-kube-api-access-ffwxw\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.237088 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-549b454695-9zzgx"] Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.238220 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.250115 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-serving-cert\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.250185 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvsb\" (UniqueName: \"kubernetes.io/projected/547350b1-93f3-451b-9c39-905a201a4af4-kube-api-access-wgvsb\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.250220 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-console-config\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.250242 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-oauth-serving-cert\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.250277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-trusted-ca-bundle\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.250362 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-oauth-config\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.250428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-service-ca\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.258848 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-549b454695-9zzgx"] Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.261867 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.283213 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.293400 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7z5qw" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.352011 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-service-ca\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.352083 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-serving-cert\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.352133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvsb\" (UniqueName: \"kubernetes.io/projected/547350b1-93f3-451b-9c39-905a201a4af4-kube-api-access-wgvsb\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.352176 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-console-config\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.352199 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-oauth-serving-cert\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.352244 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-trusted-ca-bundle\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.352274 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-oauth-config\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.353678 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-service-ca\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.357554 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-oauth-serving-cert\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.357595 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-trusted-ca-bundle\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.358042 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-console-config\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.361254 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-oauth-config\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.362219 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-serving-cert\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.379039 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvsb\" (UniqueName: \"kubernetes.io/projected/547350b1-93f3-451b-9c39-905a201a4af4-kube-api-access-wgvsb\") pod \"console-549b454695-9zzgx\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.532157 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp"] Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.564940 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.641017 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7z5qw" event={"ID":"074e28a6-e1f1-43d3-b34a-b2d8c143f8af","Type":"ContainerStarted","Data":"058efcba987df11636257bdccbdab73cf21f3e73740f1654522291b543ad564a"} Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.642628 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp" event={"ID":"a21d5243-150d-488b-9cf2-ab95ee2732e6","Type":"ContainerStarted","Data":"64ee2c563c6f5167b70abd041d38401bed57622faef55af95f8225989f845edf"} Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.645469 4792 generic.go:334] "Generic (PLEG): container finished" podID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerID="c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6" exitCode=0 Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.645495 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn896" event={"ID":"96336823-3199-417f-a9c1-d1e7571c2e1d","Type":"ContainerDied","Data":"c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6"} Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.648330 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs"] Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.663019 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5131ddcc-b3d4-4df4-9474-19896fb63573-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.668569 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5131ddcc-b3d4-4df4-9474-19896fb63573-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-th928\" (UID: \"5131ddcc-b3d4-4df4-9474-19896fb63573\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928" Nov 27 17:24:54 crc kubenswrapper[4792]: I1127 17:24:54.948430 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928" Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.010285 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-549b454695-9zzgx"] Nov 27 17:24:55 crc kubenswrapper[4792]: W1127 17:24:55.027281 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod547350b1_93f3_451b_9c39_905a201a4af4.slice/crio-6a633abfe9dd0549b28f17b10db5fbec7c27b33194b163560a95ca56cd62a430 WatchSource:0}: Error finding container 6a633abfe9dd0549b28f17b10db5fbec7c27b33194b163560a95ca56cd62a430: Status 404 returned error can't find the container with id 6a633abfe9dd0549b28f17b10db5fbec7c27b33194b163560a95ca56cd62a430 Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.424099 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928"] Nov 27 17:24:55 crc kubenswrapper[4792]: W1127 17:24:55.429790 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5131ddcc_b3d4_4df4_9474_19896fb63573.slice/crio-d43399f43ba72a72279e77422c73f13f1f8c94c9e9f2ee802046e09da19eec16 WatchSource:0}: Error finding container d43399f43ba72a72279e77422c73f13f1f8c94c9e9f2ee802046e09da19eec16: Status 404 returned error can't find the container with id d43399f43ba72a72279e77422c73f13f1f8c94c9e9f2ee802046e09da19eec16 Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.653361 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn896" event={"ID":"96336823-3199-417f-a9c1-d1e7571c2e1d","Type":"ContainerStarted","Data":"8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e"} Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.655395 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-549b454695-9zzgx" event={"ID":"547350b1-93f3-451b-9c39-905a201a4af4","Type":"ContainerStarted","Data":"a76c57692388131dbdb6105cf19be36865dca8b8e1c4b79e670ee8e8f064cf6f"} Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.655434 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-549b454695-9zzgx" event={"ID":"547350b1-93f3-451b-9c39-905a201a4af4","Type":"ContainerStarted","Data":"6a633abfe9dd0549b28f17b10db5fbec7c27b33194b163560a95ca56cd62a430"} Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.657607 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" event={"ID":"da86e440-8b68-4f21-bc7b-5cc71334ce5a","Type":"ContainerStarted","Data":"fd128cee699b1154b0b617d48ab2a153bdaaf58873dc82e5e06cb8920033eb71"} Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.658923 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928" event={"ID":"5131ddcc-b3d4-4df4-9474-19896fb63573","Type":"ContainerStarted","Data":"d43399f43ba72a72279e77422c73f13f1f8c94c9e9f2ee802046e09da19eec16"} Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.673108 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mn896" podStartSLOduration=2.142989756 podStartE2EDuration="4.673088481s" podCreationTimestamp="2025-11-27 17:24:51 +0000 UTC" firstStartedPulling="2025-11-27 17:24:52.623472438 +0000 UTC m=+914.966298776" 
lastFinishedPulling="2025-11-27 17:24:55.153571183 +0000 UTC m=+917.496397501" observedRunningTime="2025-11-27 17:24:55.669504673 +0000 UTC m=+918.012331011" watchObservedRunningTime="2025-11-27 17:24:55.673088481 +0000 UTC m=+918.015914799" Nov 27 17:24:55 crc kubenswrapper[4792]: I1127 17:24:55.698936 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-549b454695-9zzgx" podStartSLOduration=1.698916808 podStartE2EDuration="1.698916808s" podCreationTimestamp="2025-11-27 17:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:24:55.694596422 +0000 UTC m=+918.037422740" watchObservedRunningTime="2025-11-27 17:24:55.698916808 +0000 UTC m=+918.041743126" Nov 27 17:24:58 crc kubenswrapper[4792]: I1127 17:24:58.695434 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7z5qw" event={"ID":"074e28a6-e1f1-43d3-b34a-b2d8c143f8af","Type":"ContainerStarted","Data":"a32feda49a4ede45b54c94c2915e9826d8f8eb89849686a85d8bd9bc89462819"} Nov 27 17:24:58 crc kubenswrapper[4792]: I1127 17:24:58.695756 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7z5qw" Nov 27 17:24:58 crc kubenswrapper[4792]: I1127 17:24:58.702389 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp" event={"ID":"a21d5243-150d-488b-9cf2-ab95ee2732e6","Type":"ContainerStarted","Data":"904a7c784a1d8a7edd882635178f40530f5bf8c32cb85f5a88d5973fdd0f2659"} Nov 27 17:24:58 crc kubenswrapper[4792]: I1127 17:24:58.709690 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" event={"ID":"da86e440-8b68-4f21-bc7b-5cc71334ce5a","Type":"ContainerStarted","Data":"de68fc257e60cbc8ca449d66501cdfbba7e6a41f379073588238cc04b9101e61"} Nov 27 17:24:58 crc kubenswrapper[4792]: I1127 17:24:58.709799 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" Nov 27 17:24:58 crc kubenswrapper[4792]: I1127 17:24:58.768095 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" podStartSLOduration=2.760318913 podStartE2EDuration="5.768071313s" podCreationTimestamp="2025-11-27 17:24:53 +0000 UTC" firstStartedPulling="2025-11-27 17:24:54.66224641 +0000 UTC m=+917.005072728" lastFinishedPulling="2025-11-27 17:24:57.66999877 +0000 UTC m=+920.012825128" observedRunningTime="2025-11-27 17:24:58.764343361 +0000 UTC m=+921.107169679" watchObservedRunningTime="2025-11-27 17:24:58.768071313 +0000 UTC m=+921.110897631" Nov 27 17:24:58 crc kubenswrapper[4792]: I1127 17:24:58.782260 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7z5qw" podStartSLOduration=2.48621449 podStartE2EDuration="5.782245133s" podCreationTimestamp="2025-11-27 17:24:53 +0000 UTC" firstStartedPulling="2025-11-27 17:24:54.379188516 +0000 UTC m=+916.722014834" lastFinishedPulling="2025-11-27 17:24:57.675219119 +0000 UTC m=+920.018045477" observedRunningTime="2025-11-27 17:24:58.781131265 +0000 UTC m=+921.123957593" watchObservedRunningTime="2025-11-27 17:24:58.782245133 +0000 UTC m=+921.125071451" Nov 27 17:25:00 crc kubenswrapper[4792]: I1127 17:25:00.749891 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928" event={"ID":"5131ddcc-b3d4-4df4-9474-19896fb63573","Type":"ContainerStarted","Data":"cb62799e712591d068edd8b21c63ff4ebc33621f92a28735a401af9c1a2dee45"} Nov 27 17:25:00 crc kubenswrapper[4792]: I1127 17:25:00.768918 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-th928" podStartSLOduration=2.993246884 podStartE2EDuration="6.768873209s" podCreationTimestamp="2025-11-27 17:24:54 +0000 UTC" firstStartedPulling="2025-11-27 17:24:55.432805633 +0000 UTC m=+917.775631951" lastFinishedPulling="2025-11-27 17:24:59.208431958 +0000 UTC m=+921.551258276" observedRunningTime="2025-11-27 17:25:00.764913751 +0000 UTC m=+923.107740079" watchObservedRunningTime="2025-11-27 17:25:00.768873209 +0000 UTC m=+923.111699517" Nov 27 17:25:01 crc kubenswrapper[4792]: I1127 17:25:01.510305 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mn896" Nov 27 17:25:01 crc kubenswrapper[4792]: I1127 17:25:01.510760 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mn896" Nov 27 17:25:01 crc kubenswrapper[4792]: I1127 17:25:01.568504 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mn896" Nov 27 17:25:01 crc kubenswrapper[4792]: I1127 17:25:01.762540 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp" event={"ID":"a21d5243-150d-488b-9cf2-ab95ee2732e6","Type":"ContainerStarted","Data":"8b4bfe7e8932fc1dd87ebaba15172d0359130327cc6c2e13334d0736e52fa6a8"} Nov 27 17:25:01 crc kubenswrapper[4792]: I1127 17:25:01.784940 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-r6wgp" podStartSLOduration=2.70432761 podStartE2EDuration="8.784918757s" podCreationTimestamp="2025-11-27 17:24:53 +0000 UTC" firstStartedPulling="2025-11-27 17:24:54.550299258 +0000 UTC m=+916.893125576" lastFinishedPulling="2025-11-27 17:25:00.630890405 +0000 UTC m=+922.973716723" observedRunningTime="2025-11-27 17:25:01.78419811 +0000 UTC m=+924.127024458" watchObservedRunningTime="2025-11-27 17:25:01.784918757 +0000 UTC m=+924.127745075" Nov 27 17:25:01 crc kubenswrapper[4792]: I1127 17:25:01.854266 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mn896" Nov 27 17:25:01 crc kubenswrapper[4792]: I1127 17:25:01.909176 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mn896"] Nov 27 17:25:03 crc kubenswrapper[4792]: I1127 17:25:03.784352 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mn896" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerName="registry-server" containerID="cri-o://8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e" gracePeriod=2 Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.321255 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7z5qw" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.374027 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mn896" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.456502 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-catalog-content\") pod \"96336823-3199-417f-a9c1-d1e7571c2e1d\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.456678 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-utilities\") pod \"96336823-3199-417f-a9c1-d1e7571c2e1d\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.456807 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jnr9\" (UniqueName: \"kubernetes.io/projected/96336823-3199-417f-a9c1-d1e7571c2e1d-kube-api-access-6jnr9\") pod \"96336823-3199-417f-a9c1-d1e7571c2e1d\" (UID: \"96336823-3199-417f-a9c1-d1e7571c2e1d\") " Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.457646 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-utilities" (OuterVolumeSpecName: "utilities") pod "96336823-3199-417f-a9c1-d1e7571c2e1d" (UID: "96336823-3199-417f-a9c1-d1e7571c2e1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.463218 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96336823-3199-417f-a9c1-d1e7571c2e1d-kube-api-access-6jnr9" (OuterVolumeSpecName: "kube-api-access-6jnr9") pod "96336823-3199-417f-a9c1-d1e7571c2e1d" (UID: "96336823-3199-417f-a9c1-d1e7571c2e1d"). InnerVolumeSpecName "kube-api-access-6jnr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.520116 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96336823-3199-417f-a9c1-d1e7571c2e1d" (UID: "96336823-3199-417f-a9c1-d1e7571c2e1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.559721 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.559763 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96336823-3199-417f-a9c1-d1e7571c2e1d-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.559774 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jnr9\" (UniqueName: \"kubernetes.io/projected/96336823-3199-417f-a9c1-d1e7571c2e1d-kube-api-access-6jnr9\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.565936 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.565989 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.573370 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.797709 4792 generic.go:334] "Generic (PLEG): container finished" podID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerID="8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e" exitCode=0 Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.797829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn896" event={"ID":"96336823-3199-417f-a9c1-d1e7571c2e1d","Type":"ContainerDied","Data":"8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e"} Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.797890 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mn896" event={"ID":"96336823-3199-417f-a9c1-d1e7571c2e1d","Type":"ContainerDied","Data":"e3ab3a42ec125fbf67df4a4a9cafe70a537a4492358fb0db88e533ac60174d07"} Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.797912 4792 scope.go:117] "RemoveContainer" containerID="8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.799816 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mn896" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.802547 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.819908 4792 scope.go:117] "RemoveContainer" containerID="c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.849101 4792 scope.go:117] "RemoveContainer" containerID="c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.866427 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mn896"] Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.875561 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mn896"] Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.894058 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-54b4d94fcb-zr2dd"] Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.894955 4792 scope.go:117] "RemoveContainer" containerID="8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e" Nov 27 17:25:04 crc kubenswrapper[4792]: E1127 17:25:04.904823 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e\": container with ID starting with 8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e not found: ID does not exist" containerID="8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.904873 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e"} err="failed to get container status \"8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e\": rpc error: code = NotFound desc = could not find container \"8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e\": container with ID starting with 8b63b114d27dedbe7f60eba34a74b4349ec19e9790cb59b71c483f7f105dbc7e not found: ID does not exist" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.904902 4792 scope.go:117] "RemoveContainer" containerID="c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6" Nov 27 17:25:04 crc kubenswrapper[4792]: E1127 17:25:04.905415 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6\": container with ID starting with c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6 not found: ID does not exist" containerID="c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.905482 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6"} err="failed to get container status \"c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6\": rpc error: code = NotFound desc = could not find container \"c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6\": container with ID starting with 
c7d035e52c34fa7a0f73de43e14930140c4f5f8a19779d257994edc800132ee6 not found: ID does not exist" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.905528 4792 scope.go:117] "RemoveContainer" containerID="c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86" Nov 27 17:25:04 crc kubenswrapper[4792]: E1127 17:25:04.906794 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86\": container with ID starting with c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86 not found: ID does not exist" containerID="c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86" Nov 27 17:25:04 crc kubenswrapper[4792]: I1127 17:25:04.906849 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86"} err="failed to get container status \"c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86\": rpc error: code = NotFound desc = could not find container \"c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86\": container with ID starting with c3a6948965400578eca0fe0bad434775e29944ccb33bcb36b399cbc212af4e86 not found: ID does not exist" Nov 27 17:25:06 crc kubenswrapper[4792]: I1127 17:25:06.698727 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" path="/var/lib/kubelet/pods/96336823-3199-417f-a9c1-d1e7571c2e1d/volumes" Nov 27 17:25:08 crc kubenswrapper[4792]: I1127 17:25:08.290910 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:25:08 crc kubenswrapper[4792]: I1127 17:25:08.291353 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:25:14 crc kubenswrapper[4792]: I1127 17:25:14.291001 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-lmhfs" Nov 27 17:25:16 crc kubenswrapper[4792]: I1127 17:25:16.972464 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jxp55"] Nov 27 17:25:16 crc kubenswrapper[4792]: E1127 17:25:16.973298 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerName="registry-server" Nov 27 17:25:16 crc kubenswrapper[4792]: I1127 17:25:16.973316 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerName="registry-server" Nov 27 17:25:16 crc kubenswrapper[4792]: E1127 17:25:16.973333 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerName="extract-utilities" Nov 27 17:25:16 crc kubenswrapper[4792]: I1127 17:25:16.973341 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerName="extract-utilities" Nov 27 17:25:16 crc kubenswrapper[4792]: E1127 17:25:16.973371 
4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerName="extract-content" Nov 27 17:25:16 crc kubenswrapper[4792]: I1127 17:25:16.973411 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerName="extract-content" Nov 27 17:25:16 crc kubenswrapper[4792]: I1127 17:25:16.976463 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="96336823-3199-417f-a9c1-d1e7571c2e1d" containerName="registry-server" Nov 27 17:25:16 crc kubenswrapper[4792]: I1127 17:25:16.979988 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:16 crc kubenswrapper[4792]: I1127 17:25:16.982800 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxp55"] Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.068358 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlwk\" (UniqueName: \"kubernetes.io/projected/31a0061d-a0a7-4517-84e5-3d01ec405c5b-kube-api-access-9rlwk\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.068510 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-catalog-content\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.068534 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-utilities\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.169874 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-catalog-content\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.170170 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-utilities\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.170320 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlwk\" (UniqueName: \"kubernetes.io/projected/31a0061d-a0a7-4517-84e5-3d01ec405c5b-kube-api-access-9rlwk\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.170983 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-catalog-content\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.170988 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-utilities\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.191033 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlwk\" (UniqueName: \"kubernetes.io/projected/31a0061d-a0a7-4517-84e5-3d01ec405c5b-kube-api-access-9rlwk\") pod \"redhat-marketplace-jxp55\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.301344 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.737499 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxp55"] Nov 27 17:25:17 crc kubenswrapper[4792]: I1127 17:25:17.896285 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxp55" event={"ID":"31a0061d-a0a7-4517-84e5-3d01ec405c5b","Type":"ContainerStarted","Data":"188ced8c77064ec18035b359ceedf39199712b1af7483176bba655e8e1c43ef1"} Nov 27 17:25:18 crc kubenswrapper[4792]: I1127 17:25:18.924967 4792 generic.go:334] "Generic (PLEG): container finished" podID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerID="2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765" exitCode=0 Nov 27 17:25:18 crc kubenswrapper[4792]: I1127 17:25:18.925017 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxp55" event={"ID":"31a0061d-a0a7-4517-84e5-3d01ec405c5b","Type":"ContainerDied","Data":"2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765"} Nov 27 17:25:19 crc kubenswrapper[4792]: I1127 17:25:19.935795 4792 generic.go:334] "Generic (PLEG): container finished" podID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerID="9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d" exitCode=0 Nov 27 17:25:19 crc kubenswrapper[4792]: I1127 17:25:19.935880 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxp55" event={"ID":"31a0061d-a0a7-4517-84e5-3d01ec405c5b","Type":"ContainerDied","Data":"9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d"} Nov 27 17:25:20 crc kubenswrapper[4792]: I1127 17:25:20.950957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxp55" event={"ID":"31a0061d-a0a7-4517-84e5-3d01ec405c5b","Type":"ContainerStarted","Data":"a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d"} Nov 27 17:25:20 crc kubenswrapper[4792]: I1127 17:25:20.967914 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jxp55" podStartSLOduration=3.450534429 podStartE2EDuration="4.967899997s" podCreationTimestamp="2025-11-27 17:25:16 +0000 UTC" firstStartedPulling="2025-11-27 17:25:18.926937241 +0000 UTC m=+941.269763559" 
lastFinishedPulling="2025-11-27 17:25:20.444302799 +0000 UTC m=+942.787129127" observedRunningTime="2025-11-27 17:25:20.966585045 +0000 UTC m=+943.309411373" watchObservedRunningTime="2025-11-27 17:25:20.967899997 +0000 UTC m=+943.310726315" Nov 27 17:25:27 crc kubenswrapper[4792]: I1127 17:25:27.302013 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:27 crc kubenswrapper[4792]: I1127 17:25:27.306603 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:27 crc kubenswrapper[4792]: I1127 17:25:27.374791 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:28 crc kubenswrapper[4792]: I1127 17:25:28.062296 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:28 crc kubenswrapper[4792]: I1127 17:25:28.104089 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxp55"] Nov 27 17:25:29 crc kubenswrapper[4792]: I1127 17:25:29.963906 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-54b4d94fcb-zr2dd" podUID="31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" containerName="console" containerID="cri-o://3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18" gracePeriod=15 Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.034868 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jxp55" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="registry-server" containerID="cri-o://a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d" gracePeriod=2 Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.499227 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-54b4d94fcb-zr2dd_31bf1e2e-cbf0-420f-b605-f6ff6064b0cb/console/0.log" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.499779 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.507401 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.621450 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-utilities\") pod \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622440 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-utilities" (OuterVolumeSpecName: "utilities") pod "31a0061d-a0a7-4517-84e5-3d01ec405c5b" (UID: "31a0061d-a0a7-4517-84e5-3d01ec405c5b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622545 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rlwk\" (UniqueName: \"kubernetes.io/projected/31a0061d-a0a7-4517-84e5-3d01ec405c5b-kube-api-access-9rlwk\") pod \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622587 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-service-ca\") pod \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622635 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55b6z\" (UniqueName: \"kubernetes.io/projected/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-kube-api-access-55b6z\") pod \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622690 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-trusted-ca-bundle\") pod \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622724 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-serving-cert\") pod \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622775 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-oauth-config\") pod \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622835 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-catalog-content\") pod \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\" (UID: \"31a0061d-a0a7-4517-84e5-3d01ec405c5b\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622934 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-config\") pod \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.622954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-oauth-serving-cert\") pod \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\" (UID: \"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb\") " Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.623362 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-service-ca" (OuterVolumeSpecName: "service-ca") pod 
"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" (UID: "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.623542 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.623557 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.623906 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" (UID: "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.624422 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-config" (OuterVolumeSpecName: "console-config") pod "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" (UID: "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.624736 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" (UID: "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.627812 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" (UID: "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.627922 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" (UID: "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.628451 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-kube-api-access-55b6z" (OuterVolumeSpecName: "kube-api-access-55b6z") pod "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" (UID: "31bf1e2e-cbf0-420f-b605-f6ff6064b0cb"). InnerVolumeSpecName "kube-api-access-55b6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.628632 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a0061d-a0a7-4517-84e5-3d01ec405c5b-kube-api-access-9rlwk" (OuterVolumeSpecName: "kube-api-access-9rlwk") pod "31a0061d-a0a7-4517-84e5-3d01ec405c5b" (UID: "31a0061d-a0a7-4517-84e5-3d01ec405c5b"). InnerVolumeSpecName "kube-api-access-9rlwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.646876 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31a0061d-a0a7-4517-84e5-3d01ec405c5b" (UID: "31a0061d-a0a7-4517-84e5-3d01ec405c5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.725191 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rlwk\" (UniqueName: \"kubernetes.io/projected/31a0061d-a0a7-4517-84e5-3d01ec405c5b-kube-api-access-9rlwk\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.725244 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55b6z\" (UniqueName: \"kubernetes.io/projected/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-kube-api-access-55b6z\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.725264 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.725280 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.725296 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.725310 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a0061d-a0a7-4517-84e5-3d01ec405c5b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.725327 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-console-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:30 crc kubenswrapper[4792]: I1127 17:25:30.725339 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.043943 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-54b4d94fcb-zr2dd_31bf1e2e-cbf0-420f-b605-f6ff6064b0cb/console/0.log" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.044006 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" containerID="3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18" exitCode=2 Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.044076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54b4d94fcb-zr2dd" event={"ID":"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb","Type":"ContainerDied","Data":"3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18"} Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.044107 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-54b4d94fcb-zr2dd" event={"ID":"31bf1e2e-cbf0-420f-b605-f6ff6064b0cb","Type":"ContainerDied","Data":"664954eaa21211e4781a327c5eadd3d68a41b314f720e945a90d4367bdfcf6e8"} Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.044148 4792 scope.go:117] "RemoveContainer" containerID="3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.044163 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-54b4d94fcb-zr2dd" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.049339 4792 generic.go:334] "Generic (PLEG): container finished" podID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerID="a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d" exitCode=0 Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.049392 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxp55" event={"ID":"31a0061d-a0a7-4517-84e5-3d01ec405c5b","Type":"ContainerDied","Data":"a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d"} Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.049424 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxp55" event={"ID":"31a0061d-a0a7-4517-84e5-3d01ec405c5b","Type":"ContainerDied","Data":"188ced8c77064ec18035b359ceedf39199712b1af7483176bba655e8e1c43ef1"} Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.049448 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxp55" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.065687 4792 scope.go:117] "RemoveContainer" containerID="3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18" Nov 27 17:25:31 crc kubenswrapper[4792]: E1127 17:25:31.067826 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18\": container with ID starting with 3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18 not found: ID does not exist" containerID="3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.067877 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18"} err="failed to get container status \"3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18\": rpc error: code = NotFound desc = could not find container \"3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18\": container with ID starting with 3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18 not found: ID does not exist" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.067920 4792 scope.go:117] "RemoveContainer" containerID="a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.085068 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-54b4d94fcb-zr2dd"] Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.086776 4792 scope.go:117] "RemoveContainer" containerID="9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.091833 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-54b4d94fcb-zr2dd"] Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.101911 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxp55"] Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.107899 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxp55"] Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.142601 4792 scope.go:117] "RemoveContainer" containerID="2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.173516 4792 scope.go:117] "RemoveContainer" containerID="a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d" Nov 27 17:25:31 crc kubenswrapper[4792]: E1127 17:25:31.174220 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d\": container with ID starting with a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d not found: ID does not exist" containerID="a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.174259 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d"} err="failed to get container status \"a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d\": rpc error: code = NotFound desc = could 
not find container \"a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d\": container with ID starting with a9b23ecc35c2df7218a0e3dd8da4bde0021320ed4b9001c69ea3e863ddf5875d not found: ID does not exist" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.174286 4792 scope.go:117] "RemoveContainer" containerID="9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d" Nov 27 17:25:31 crc kubenswrapper[4792]: E1127 17:25:31.174831 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d\": container with ID starting with 9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d not found: ID does not exist" containerID="9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.174869 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d"} err="failed to get container status \"9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d\": rpc error: code = NotFound desc = could not find container \"9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d\": container with ID starting with 9fd6946d3d0ca782ffceb55a7000c4631d7f424e2be176bb99425e1b15ac1c9d not found: ID does not exist" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.174889 4792 scope.go:117] "RemoveContainer" containerID="2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765" Nov 27 17:25:31 crc kubenswrapper[4792]: E1127 17:25:31.175141 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765\": container with ID starting with 2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765 not found: ID does not exist" containerID="2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765" Nov 27 17:25:31 crc kubenswrapper[4792]: I1127 17:25:31.175167 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765"} err="failed to get container status \"2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765\": rpc error: code = NotFound desc = could not find container \"2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765\": container with ID starting with 2b0792dfdcf9f84538628be23ebc44ec0d360bb51d8e91478f778b197841a765 not found: ID does not exist" Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489001 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"] Nov 27 17:25:32 crc kubenswrapper[4792]: E1127 17:25:32.489539 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" containerName="console" Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489552 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" containerName="console" Nov 27 17:25:32 crc kubenswrapper[4792]: E1127 17:25:32.489559 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="extract-content" Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489566 4792 
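[Editor's note: the RemoveContainer / "ID does not exist" pairs above are the kubelet retrying deletion of containers CRI-O has already removed; a NotFound status from the runtime is effectively success, which is why these errors do not block pod cleanup. A minimal sketch of that tolerance using gRPC status codes; the helper and the fake runtime call are mine, not kubelet code.]

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer wraps a runtime delete call and, like the cleanup path
// logged above, treats NotFound as "already gone" rather than a failure.
func removeContainer(id string, del func(string) error) error {
	if err := del(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already gone, ignoring\n", id)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	// Simulate the runtime answering NotFound, as CRI-O does in the log.
	fake := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	_ = removeContainer("3bbfda025554927cbf5845664c839ee0dbe6c1f7c1725e427425f63342ff2f18", fake)
}
```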
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489001 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"]
Nov 27 17:25:32 crc kubenswrapper[4792]: E1127 17:25:32.489539 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" containerName="console"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489552 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" containerName="console"
Nov 27 17:25:32 crc kubenswrapper[4792]: E1127 17:25:32.489559 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="extract-content"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489566 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="extract-content"
Nov 27 17:25:32 crc kubenswrapper[4792]: E1127 17:25:32.489582 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="extract-utilities"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489588 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="extract-utilities"
Nov 27 17:25:32 crc kubenswrapper[4792]: E1127 17:25:32.489602 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="registry-server"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489609 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="registry-server"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489742 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" containerName="console"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.489763 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" containerName="registry-server"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.490743 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.495834 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.513615 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"]
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.563009 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.563295 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.563415 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmw8\" (UniqueName: \"kubernetes.io/projected/e69faa79-91a5-4146-a048-598f6a9be342-kube-api-access-fzmw8\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.665247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.665381 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzmw8\" (UniqueName: \"kubernetes.io/projected/e69faa79-91a5-4146-a048-598f6a9be342-kube-api-access-fzmw8\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.665457 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.665829 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.665944 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.701529 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzmw8\" (UniqueName: \"kubernetes.io/projected/e69faa79-91a5-4146-a048-598f6a9be342-kube-api-access-fzmw8\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.711270 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a0061d-a0a7-4517-84e5-3d01ec405c5b" path="/var/lib/kubelet/pods/31a0061d-a0a7-4517-84e5-3d01ec405c5b/volumes"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.712193 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31bf1e2e-cbf0-420f-b605-f6ff6064b0cb" path="/var/lib/kubelet/pods/31bf1e2e-cbf0-420f-b605-f6ff6064b0cb/volumes"
Nov 27 17:25:32 crc kubenswrapper[4792]: I1127 17:25:32.805124 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"
Nov 27 17:25:33 crc kubenswrapper[4792]: I1127 17:25:33.007547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l"]
Nov 27 17:25:33 crc kubenswrapper[4792]: I1127 17:25:33.065123 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l" event={"ID":"e69faa79-91a5-4146-a048-598f6a9be342","Type":"ContainerStarted","Data":"0f6adc55220092415b57c2c7f83a08a73a16cfb91b391a8d376e01c6f15d65b5"}
Nov 27 17:25:36 crc kubenswrapper[4792]: I1127 17:25:36.092131 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l" event={"ID":"e69faa79-91a5-4146-a048-598f6a9be342","Type":"ContainerStarted","Data":"83db8a5823270981b14f6fb405bc0d2583534109ef83efa6789ce079c805d1a1"}
container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="eb4dfc187c50b610e23a24ccd114a91f4e733187652bfdcb858d9943f47d0623" exitCode=0 Nov 27 17:25:39 crc kubenswrapper[4792]: I1127 17:25:39.310598 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"eb4dfc187c50b610e23a24ccd114a91f4e733187652bfdcb858d9943f47d0623"} Nov 27 17:25:39 crc kubenswrapper[4792]: I1127 17:25:39.310623 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"d60cedcb892e88638661f9a31eeedcc56ec861fc1db68b55e1cd3c8c8a97edef"} Nov 27 17:25:39 crc kubenswrapper[4792]: I1127 17:25:39.310639 4792 scope.go:117] "RemoveContainer" containerID="9f01bf94bd55fb4aa5577fea4f28f3b654e0b34834b1e5c5ebc907510f5b8133" Nov 27 17:25:40 crc kubenswrapper[4792]: I1127 17:25:40.320896 4792 generic.go:334] "Generic (PLEG): container finished" podID="e69faa79-91a5-4146-a048-598f6a9be342" containerID="91d2577c2853469039afc8e62a7fa5a82067c0d96fdf0bb7025ec11e9afcd5a5" exitCode=0 Nov 27 17:25:40 crc kubenswrapper[4792]: I1127 17:25:40.320945 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l" event={"ID":"e69faa79-91a5-4146-a048-598f6a9be342","Type":"ContainerDied","Data":"91d2577c2853469039afc8e62a7fa5a82067c0d96fdf0bb7025ec11e9afcd5a5"} Nov 27 17:25:41 crc kubenswrapper[4792]: I1127 17:25:41.340530 4792 generic.go:334] "Generic (PLEG): container finished" podID="e69faa79-91a5-4146-a048-598f6a9be342" containerID="0156cf6ff9d83c297a3297af8950ce1e7b6d58a6d2ae896ad3f4b2d5b5a4ec68" exitCode=0 Nov 27 17:25:41 crc kubenswrapper[4792]: I1127 17:25:41.340598 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l" event={"ID":"e69faa79-91a5-4146-a048-598f6a9be342","Type":"ContainerDied","Data":"0156cf6ff9d83c297a3297af8950ce1e7b6d58a6d2ae896ad3f4b2d5b5a4ec68"} Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.688408 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l" Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.728737 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-bundle\") pod \"e69faa79-91a5-4146-a048-598f6a9be342\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.728835 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzmw8\" (UniqueName: \"kubernetes.io/projected/e69faa79-91a5-4146-a048-598f6a9be342-kube-api-access-fzmw8\") pod \"e69faa79-91a5-4146-a048-598f6a9be342\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.728876 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-util\") pod \"e69faa79-91a5-4146-a048-598f6a9be342\" (UID: \"e69faa79-91a5-4146-a048-598f6a9be342\") " Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.730249 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-bundle" (OuterVolumeSpecName: "bundle") pod "e69faa79-91a5-4146-a048-598f6a9be342" (UID: "e69faa79-91a5-4146-a048-598f6a9be342"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.733839 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e69faa79-91a5-4146-a048-598f6a9be342-kube-api-access-fzmw8" (OuterVolumeSpecName: "kube-api-access-fzmw8") pod "e69faa79-91a5-4146-a048-598f6a9be342" (UID: "e69faa79-91a5-4146-a048-598f6a9be342"). InnerVolumeSpecName "kube-api-access-fzmw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.745356 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-util" (OuterVolumeSpecName: "util") pod "e69faa79-91a5-4146-a048-598f6a9be342" (UID: "e69faa79-91a5-4146-a048-598f6a9be342"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.830038 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-util\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.830067 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e69faa79-91a5-4146-a048-598f6a9be342-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:42 crc kubenswrapper[4792]: I1127 17:25:42.830076 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzmw8\" (UniqueName: \"kubernetes.io/projected/e69faa79-91a5-4146-a048-598f6a9be342-kube-api-access-fzmw8\") on node \"crc\" DevicePath \"\"" Nov 27 17:25:43 crc kubenswrapper[4792]: I1127 17:25:43.356776 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l" event={"ID":"e69faa79-91a5-4146-a048-598f6a9be342","Type":"ContainerDied","Data":"0f6adc55220092415b57c2c7f83a08a73a16cfb91b391a8d376e01c6f15d65b5"} Nov 27 17:25:43 crc kubenswrapper[4792]: I1127 17:25:43.357088 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f6adc55220092415b57c2c7f83a08a73a16cfb91b391a8d376e01c6f15d65b5" Nov 27 17:25:43 crc kubenswrapper[4792]: I1127 17:25:43.356826 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.952803 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rg2ph"] Nov 27 17:25:51 crc kubenswrapper[4792]: E1127 17:25:51.954857 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69faa79-91a5-4146-a048-598f6a9be342" containerName="util" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.954959 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69faa79-91a5-4146-a048-598f6a9be342" containerName="util" Nov 27 17:25:51 crc kubenswrapper[4792]: E1127 17:25:51.955065 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69faa79-91a5-4146-a048-598f6a9be342" containerName="pull" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.955148 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69faa79-91a5-4146-a048-598f6a9be342" containerName="pull" Nov 27 17:25:51 crc kubenswrapper[4792]: E1127 17:25:51.955226 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e69faa79-91a5-4146-a048-598f6a9be342" containerName="extract" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.955292 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e69faa79-91a5-4146-a048-598f6a9be342" containerName="extract" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.955554 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e69faa79-91a5-4146-a048-598f6a9be342" containerName="extract" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.956919 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.969747 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rg2ph"] Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.988181 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsl9\" (UniqueName: \"kubernetes.io/projected/07170faf-4e47-4af3-a917-9695594928fc-kube-api-access-6gsl9\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.988276 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-catalog-content\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:51 crc kubenswrapper[4792]: I1127 17:25:51.988337 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-utilities\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:52 crc kubenswrapper[4792]: I1127 17:25:52.089585 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsl9\" (UniqueName: \"kubernetes.io/projected/07170faf-4e47-4af3-a917-9695594928fc-kube-api-access-6gsl9\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:52 crc kubenswrapper[4792]: I1127 17:25:52.090263 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-catalog-content\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:52 crc kubenswrapper[4792]: I1127 17:25:52.090409 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-utilities\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:52 crc kubenswrapper[4792]: I1127 17:25:52.090789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-catalog-content\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:52 crc kubenswrapper[4792]: I1127 17:25:52.090815 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-utilities\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:52 crc kubenswrapper[4792]: I1127 17:25:52.109000 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6gsl9\" (UniqueName: \"kubernetes.io/projected/07170faf-4e47-4af3-a917-9695594928fc-kube-api-access-6gsl9\") pod \"certified-operators-rg2ph\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:52 crc kubenswrapper[4792]: I1127 17:25:52.294143 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:25:52 crc kubenswrapper[4792]: I1127 17:25:52.766000 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rg2ph"] Nov 27 17:25:53 crc kubenswrapper[4792]: I1127 17:25:53.464432 4792 generic.go:334] "Generic (PLEG): container finished" podID="07170faf-4e47-4af3-a917-9695594928fc" containerID="6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18" exitCode=0 Nov 27 17:25:53 crc kubenswrapper[4792]: I1127 17:25:53.464475 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg2ph" event={"ID":"07170faf-4e47-4af3-a917-9695594928fc","Type":"ContainerDied","Data":"6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18"} Nov 27 17:25:53 crc kubenswrapper[4792]: I1127 17:25:53.464503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg2ph" event={"ID":"07170faf-4e47-4af3-a917-9695594928fc","Type":"ContainerStarted","Data":"4a95d9d2c921b2f3b50e1bb54c27f56c6bb563b42bb1fff70020d54ae50efbb5"} Nov 27 17:25:55 crc kubenswrapper[4792]: I1127 17:25:55.477850 4792 generic.go:334] "Generic (PLEG): container finished" podID="07170faf-4e47-4af3-a917-9695594928fc" containerID="efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d" exitCode=0 Nov 27 17:25:55 crc kubenswrapper[4792]: I1127 17:25:55.477915 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg2ph" event={"ID":"07170faf-4e47-4af3-a917-9695594928fc","Type":"ContainerDied","Data":"efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d"} Nov 27 17:25:56 crc kubenswrapper[4792]: I1127 17:25:56.503806 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg2ph" event={"ID":"07170faf-4e47-4af3-a917-9695594928fc","Type":"ContainerStarted","Data":"43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951"} Nov 27 17:25:56 crc kubenswrapper[4792]: I1127 17:25:56.561172 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rg2ph" podStartSLOduration=3.106923693 podStartE2EDuration="5.561149362s" podCreationTimestamp="2025-11-27 17:25:51 +0000 UTC" firstStartedPulling="2025-11-27 17:25:53.466326571 +0000 UTC m=+975.809152889" lastFinishedPulling="2025-11-27 17:25:55.92055225 +0000 UTC m=+978.263378558" observedRunningTime="2025-11-27 17:25:56.556943808 +0000 UTC m=+978.899770126" watchObservedRunningTime="2025-11-27 17:25:56.561149362 +0000 UTC m=+978.903975680" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.069415 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq"] Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.070811 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.073018 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.073042 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.073227 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mplqd" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.074270 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.075371 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.094790 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq"] Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.183097 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-webhook-cert\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.183186 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwght\" (UniqueName: \"kubernetes.io/projected/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-kube-api-access-lwght\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.183274 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-apiservice-cert\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.285244 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-apiservice-cert\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.285306 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-webhook-cert\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.285355 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lwght\" (UniqueName: \"kubernetes.io/projected/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-kube-api-access-lwght\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.291499 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-apiservice-cert\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.293251 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-webhook-cert\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.318575 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwght\" (UniqueName: \"kubernetes.io/projected/4de98138-86e1-4a92-84ff-4ef1a2a1d57b-kube-api-access-lwght\") pod \"metallb-operator-controller-manager-fbff999dd-d7fwq\" (UID: \"4de98138-86e1-4a92-84ff-4ef1a2a1d57b\") " pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.387323 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.700772 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh"] Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.702427 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.707184 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.707270 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-prgw4" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.707451 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.729162 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh"] Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.794544 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25e66971-1039-45a3-9010-17efb7f2dbf6-apiservice-cert\") pod \"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.794724 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgm7w\" (UniqueName: \"kubernetes.io/projected/25e66971-1039-45a3-9010-17efb7f2dbf6-kube-api-access-xgm7w\") pod \"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.794757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25e66971-1039-45a3-9010-17efb7f2dbf6-webhook-cert\") pod \"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.865532 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq"] Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.896714 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgm7w\" (UniqueName: \"kubernetes.io/projected/25e66971-1039-45a3-9010-17efb7f2dbf6-kube-api-access-xgm7w\") pod \"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.896948 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25e66971-1039-45a3-9010-17efb7f2dbf6-webhook-cert\") pod \"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.897107 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25e66971-1039-45a3-9010-17efb7f2dbf6-apiservice-cert\") pod 
\"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.902244 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/25e66971-1039-45a3-9010-17efb7f2dbf6-apiservice-cert\") pod \"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.903215 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/25e66971-1039-45a3-9010-17efb7f2dbf6-webhook-cert\") pod \"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:57 crc kubenswrapper[4792]: I1127 17:25:57.918364 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgm7w\" (UniqueName: \"kubernetes.io/projected/25e66971-1039-45a3-9010-17efb7f2dbf6-kube-api-access-xgm7w\") pod \"metallb-operator-webhook-server-7844df848f-mmmmh\" (UID: \"25e66971-1039-45a3-9010-17efb7f2dbf6\") " pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:58 crc kubenswrapper[4792]: I1127 17:25:58.028143 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:25:58 crc kubenswrapper[4792]: I1127 17:25:58.450213 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh"] Nov 27 17:25:58 crc kubenswrapper[4792]: I1127 17:25:58.521222 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" event={"ID":"4de98138-86e1-4a92-84ff-4ef1a2a1d57b","Type":"ContainerStarted","Data":"7cce49b57b34f6202d76a8e16b83fecd0584e5e94e5a1f9fef295e6bb5ec9b7c"} Nov 27 17:25:58 crc kubenswrapper[4792]: I1127 17:25:58.522455 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" event={"ID":"25e66971-1039-45a3-9010-17efb7f2dbf6","Type":"ContainerStarted","Data":"adbfbdbc0c9be966f4bc707ab7e6814b9a15c68a125d86f7e9833d2195d2fa10"} Nov 27 17:26:02 crc kubenswrapper[4792]: I1127 17:26:02.294825 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:26:02 crc kubenswrapper[4792]: I1127 17:26:02.300847 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:26:02 crc kubenswrapper[4792]: I1127 17:26:02.363104 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:26:02 crc kubenswrapper[4792]: I1127 17:26:02.562522 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" event={"ID":"4de98138-86e1-4a92-84ff-4ef1a2a1d57b","Type":"ContainerStarted","Data":"8cf25ab61a02f9410591a44f103c259a3828707b1559ba4a3c011ddd19192031"} Nov 27 17:26:02 crc kubenswrapper[4792]: I1127 17:26:02.612273 4792 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" podStartSLOduration=1.918631382 podStartE2EDuration="5.612250468s" podCreationTimestamp="2025-11-27 17:25:57 +0000 UTC" firstStartedPulling="2025-11-27 17:25:57.883319652 +0000 UTC m=+980.226145980" lastFinishedPulling="2025-11-27 17:26:01.576938748 +0000 UTC m=+983.919765066" observedRunningTime="2025-11-27 17:26:02.601258508 +0000 UTC m=+984.944084826" watchObservedRunningTime="2025-11-27 17:26:02.612250468 +0000 UTC m=+984.955076796" Nov 27 17:26:02 crc kubenswrapper[4792]: I1127 17:26:02.627854 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:26:02 crc kubenswrapper[4792]: I1127 17:26:02.679311 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rg2ph"] Nov 27 17:26:03 crc kubenswrapper[4792]: I1127 17:26:03.572845 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-fbff999dd-d7fwq" Nov 27 17:26:04 crc kubenswrapper[4792]: I1127 17:26:04.579472 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" event={"ID":"25e66971-1039-45a3-9010-17efb7f2dbf6","Type":"ContainerStarted","Data":"d89ba761a4de89e3a9e8c72eea7a98420193031bac112be28a72017f5cb0b871"} Nov 27 17:26:04 crc kubenswrapper[4792]: I1127 17:26:04.579751 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rg2ph" podUID="07170faf-4e47-4af3-a917-9695594928fc" containerName="registry-server" containerID="cri-o://43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951" gracePeriod=2 Nov 27 17:26:04 crc kubenswrapper[4792]: I1127 17:26:04.607186 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" podStartSLOduration=2.090717202 podStartE2EDuration="7.607161661s" podCreationTimestamp="2025-11-27 17:25:57 +0000 UTC" firstStartedPulling="2025-11-27 17:25:58.460210728 +0000 UTC m=+980.803037046" lastFinishedPulling="2025-11-27 17:26:03.976655187 +0000 UTC m=+986.319481505" observedRunningTime="2025-11-27 17:26:04.604115156 +0000 UTC m=+986.946941474" watchObservedRunningTime="2025-11-27 17:26:04.607161661 +0000 UTC m=+986.949987979" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.201015 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.245413 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gsl9\" (UniqueName: \"kubernetes.io/projected/07170faf-4e47-4af3-a917-9695594928fc-kube-api-access-6gsl9\") pod \"07170faf-4e47-4af3-a917-9695594928fc\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.245506 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-catalog-content\") pod \"07170faf-4e47-4af3-a917-9695594928fc\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.245550 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-utilities\") pod \"07170faf-4e47-4af3-a917-9695594928fc\" (UID: \"07170faf-4e47-4af3-a917-9695594928fc\") " Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.246728 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-utilities" (OuterVolumeSpecName: "utilities") pod "07170faf-4e47-4af3-a917-9695594928fc" (UID: "07170faf-4e47-4af3-a917-9695594928fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.253078 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07170faf-4e47-4af3-a917-9695594928fc-kube-api-access-6gsl9" (OuterVolumeSpecName: "kube-api-access-6gsl9") pod "07170faf-4e47-4af3-a917-9695594928fc" (UID: "07170faf-4e47-4af3-a917-9695594928fc"). InnerVolumeSpecName "kube-api-access-6gsl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.347554 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07170faf-4e47-4af3-a917-9695594928fc-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.347591 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gsl9\" (UniqueName: \"kubernetes.io/projected/07170faf-4e47-4af3-a917-9695594928fc-kube-api-access-6gsl9\") on node \"crc\" DevicePath \"\"" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.596621 4792 generic.go:334] "Generic (PLEG): container finished" podID="07170faf-4e47-4af3-a917-9695594928fc" containerID="43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951" exitCode=0 Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.596693 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg2ph" event={"ID":"07170faf-4e47-4af3-a917-9695594928fc","Type":"ContainerDied","Data":"43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951"} Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.596729 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rg2ph" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.596758 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rg2ph" event={"ID":"07170faf-4e47-4af3-a917-9695594928fc","Type":"ContainerDied","Data":"4a95d9d2c921b2f3b50e1bb54c27f56c6bb563b42bb1fff70020d54ae50efbb5"} Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.596782 4792 scope.go:117] "RemoveContainer" containerID="43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.597168 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.621596 4792 scope.go:117] "RemoveContainer" containerID="efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.643999 4792 scope.go:117] "RemoveContainer" containerID="6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.681691 4792 scope.go:117] "RemoveContainer" containerID="43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951" Nov 27 17:26:05 crc kubenswrapper[4792]: E1127 17:26:05.682093 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951\": container with ID starting with 43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951 not found: ID does not exist" containerID="43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.682120 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951"} err="failed to get container status \"43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951\": rpc error: code = NotFound desc = could not find container \"43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951\": container with ID starting with 43685146be033002c049c7641061e9db853328e29fd44dc53c1077f325a6e951 not found: ID does not exist" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.682139 4792 scope.go:117] "RemoveContainer" containerID="efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d" Nov 27 17:26:05 crc kubenswrapper[4792]: E1127 17:26:05.682451 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d\": container with ID starting with efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d not found: ID does not exist" containerID="efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.682470 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d"} err="failed to get container status \"efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d\": rpc error: code = NotFound desc = could not find container \"efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d\": container with ID starting with 
efeada761340ff36c5c9a4af3fda80e195be93a772057dafa19c3b4f1c8a290d not found: ID does not exist" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.682481 4792 scope.go:117] "RemoveContainer" containerID="6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18" Nov 27 17:26:05 crc kubenswrapper[4792]: E1127 17:26:05.682671 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18\": container with ID starting with 6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18 not found: ID does not exist" containerID="6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18" Nov 27 17:26:05 crc kubenswrapper[4792]: I1127 17:26:05.682693 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18"} err="failed to get container status \"6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18\": rpc error: code = NotFound desc = could not find container \"6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18\": container with ID starting with 6a92427c3cf162a313cc248ac186f2e69136ab3b4cba415900bec2bd9b2a0d18 not found: ID does not exist"
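The three RemoveContainer attempts above all hit the same benign race: the containers were already gone from CRI-O by the time the kubelet asked for their status, so the runtime answered NotFound and the kubelet logged the error but carried on, since a missing container is exactly the end state a delete wants. A small sketch of that idempotent-delete pattern, using a stand-in sentinel error rather than the real gRPC NotFound status:

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the runtime's "NotFound ... ID does not exist" error.
var errNotFound = errors.New("container not found: ID does not exist")

// removeContainer treats "not found" as success: the container is
// already gone, which is exactly the state cleanup wants to reach.
func removeContainer(id string, statusOf func(string) error) error {
	if err := statusOf(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %s already removed; treating delete as done\n", id)
			return nil
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	// ...only now would the actual RemoveContainer call be issued...
	return nil
}

func main() {
	gone := func(string) error { return errNotFound }
	_ = removeContainer("43685146be03", gone) // logs and returns nil
}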
podUID="07170faf-4e47-4af3-a917-9695594928fc" containerName="registry-server" Nov 27 17:26:38 crc kubenswrapper[4792]: E1127 17:26:38.263110 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07170faf-4e47-4af3-a917-9695594928fc" containerName="extract-content" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.263118 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="07170faf-4e47-4af3-a917-9695594928fc" containerName="extract-content" Nov 27 17:26:38 crc kubenswrapper[4792]: E1127 17:26:38.263137 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07170faf-4e47-4af3-a917-9695594928fc" containerName="extract-utilities" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.263145 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="07170faf-4e47-4af3-a917-9695594928fc" containerName="extract-utilities" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.263331 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="07170faf-4e47-4af3-a917-9695594928fc" containerName="registry-server" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.268482 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.271510 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27"] Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.271593 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.271945 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fn58q" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.279932 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.301900 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.302196 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.329159 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27"] Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.379531 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rptqb"] Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.380731 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.384892 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.385438 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kvjgz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.385966 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.387608 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.391202 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cebbd73-ff6c-46b4-8b96-da44b744dc66-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-k5t27\" (UID: \"1cebbd73-ff6c-46b4-8b96-da44b744dc66\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.391267 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjcgd\" (UniqueName: \"kubernetes.io/projected/1cebbd73-ff6c-46b4-8b96-da44b744dc66-kube-api-access-cjcgd\") pod \"frr-k8s-webhook-server-7fcb986d4-k5t27\" (UID: \"1cebbd73-ff6c-46b4-8b96-da44b744dc66\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.391328 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpjlc\" (UniqueName: \"kubernetes.io/projected/a4f24305-d786-4537-b13b-86e83451bef4-kube-api-access-gpjlc\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.391364 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-frr-sockets\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.391431 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4f24305-d786-4537-b13b-86e83451bef4-frr-startup\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.391501 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-metrics\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.391529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4f24305-d786-4537-b13b-86e83451bef4-metrics-certs\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 
17:26:38.391582 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-reloader\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.391715 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-frr-conf\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.403446 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-b2ds9"] Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.405458 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.408967 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.475409 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-b2ds9"] Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496531 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cebbd73-ff6c-46b4-8b96-da44b744dc66-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-k5t27\" (UID: \"1cebbd73-ff6c-46b4-8b96-da44b744dc66\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496595 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjcgd\" (UniqueName: \"kubernetes.io/projected/1cebbd73-ff6c-46b4-8b96-da44b744dc66-kube-api-access-cjcgd\") pod \"frr-k8s-webhook-server-7fcb986d4-k5t27\" (UID: \"1cebbd73-ff6c-46b4-8b96-da44b744dc66\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496626 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-metrics-certs\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496664 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpjlc\" (UniqueName: \"kubernetes.io/projected/a4f24305-d786-4537-b13b-86e83451bef4-kube-api-access-gpjlc\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496721 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-frr-sockets\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496742 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/ee78f3b0-9199-41a2-ad7a-64e175706386-metallb-excludel2\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: E1127 17:26:38.496756 4792 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496783 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4f24305-d786-4537-b13b-86e83451bef4-frr-startup\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496813 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-metrics\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: E1127 17:26:38.496841 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cebbd73-ff6c-46b4-8b96-da44b744dc66-cert podName:1cebbd73-ff6c-46b4-8b96-da44b744dc66 nodeName:}" failed. No retries permitted until 2025-11-27 17:26:38.996814161 +0000 UTC m=+1021.339640569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cebbd73-ff6c-46b4-8b96-da44b744dc66-cert") pod "frr-k8s-webhook-server-7fcb986d4-k5t27" (UID: "1cebbd73-ff6c-46b4-8b96-da44b744dc66") : secret "frr-k8s-webhook-server-cert" not found Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496867 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4f24305-d786-4537-b13b-86e83451bef4-metrics-certs\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.496991 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-reloader\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.497035 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.497091 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp28j\" (UniqueName: \"kubernetes.io/projected/ee78f3b0-9199-41a2-ad7a-64e175706386-kube-api-access-mp28j\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.497111 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-frr-conf\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " 
pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.497172 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-metrics\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.497421 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-frr-sockets\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.498636 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a4f24305-d786-4537-b13b-86e83451bef4-frr-startup\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.498877 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-frr-conf\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.499084 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a4f24305-d786-4537-b13b-86e83451bef4-reloader\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.502250 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a4f24305-d786-4537-b13b-86e83451bef4-metrics-certs\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.519299 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpjlc\" (UniqueName: \"kubernetes.io/projected/a4f24305-d786-4537-b13b-86e83451bef4-kube-api-access-gpjlc\") pod \"frr-k8s-rhchz\" (UID: \"a4f24305-d786-4537-b13b-86e83451bef4\") " pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.523312 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjcgd\" (UniqueName: \"kubernetes.io/projected/1cebbd73-ff6c-46b4-8b96-da44b744dc66-kube-api-access-cjcgd\") pod \"frr-k8s-webhook-server-7fcb986d4-k5t27\" (UID: \"1cebbd73-ff6c-46b4-8b96-da44b744dc66\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.598481 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.598555 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xclfz\" (UniqueName: \"kubernetes.io/projected/0056c3c2-a1e5-4733-a428-fd3b91475472-kube-api-access-xclfz\") pod 
\"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.598594 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp28j\" (UniqueName: \"kubernetes.io/projected/ee78f3b0-9199-41a2-ad7a-64e175706386-kube-api-access-mp28j\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.598683 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-metrics-certs\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.598723 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ee78f3b0-9199-41a2-ad7a-64e175706386-metallb-excludel2\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.598763 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0056c3c2-a1e5-4733-a428-fd3b91475472-metrics-certs\") pod \"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.598796 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0056c3c2-a1e5-4733-a428-fd3b91475472-cert\") pod \"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.599993 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.600140 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.601162 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.602342 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fn58q" Nov 27 17:26:38 crc kubenswrapper[4792]: E1127 17:26:38.609552 4792 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 27 17:26:38 crc kubenswrapper[4792]: E1127 17:26:38.609634 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist podName:ee78f3b0-9199-41a2-ad7a-64e175706386 nodeName:}" failed. No retries permitted until 2025-11-27 17:26:39.109612163 +0000 UTC m=+1021.452438481 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist") pod "speaker-rptqb" (UID: "ee78f3b0-9199-41a2-ad7a-64e175706386") : secret "metallb-memberlist" not found Nov 27 17:26:38 crc kubenswrapper[4792]: E1127 17:26:38.609552 4792 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 27 17:26:38 crc kubenswrapper[4792]: E1127 17:26:38.609693 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-metrics-certs podName:ee78f3b0-9199-41a2-ad7a-64e175706386 nodeName:}" failed. No retries permitted until 2025-11-27 17:26:39.109685384 +0000 UTC m=+1021.452511702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-metrics-certs") pod "speaker-rptqb" (UID: "ee78f3b0-9199-41a2-ad7a-64e175706386") : secret "speaker-certs-secret" not found Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.610317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ee78f3b0-9199-41a2-ad7a-64e175706386-metallb-excludel2\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.612309 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.619112 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp28j\" (UniqueName: \"kubernetes.io/projected/ee78f3b0-9199-41a2-ad7a-64e175706386-kube-api-access-mp28j\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.700325 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xclfz\" (UniqueName: \"kubernetes.io/projected/0056c3c2-a1e5-4733-a428-fd3b91475472-kube-api-access-xclfz\") pod \"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.700441 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0056c3c2-a1e5-4733-a428-fd3b91475472-metrics-certs\") pod \"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.700472 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0056c3c2-a1e5-4733-a428-fd3b91475472-cert\") pod \"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.704449 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.704569 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.714305 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0056c3c2-a1e5-4733-a428-fd3b91475472-cert\") pod \"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.714916 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0056c3c2-a1e5-4733-a428-fd3b91475472-metrics-certs\") pod \"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.726332 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xclfz\" (UniqueName: \"kubernetes.io/projected/0056c3c2-a1e5-4733-a428-fd3b91475472-kube-api-access-xclfz\") pod \"controller-f8648f98b-b2ds9\" (UID: \"0056c3c2-a1e5-4733-a428-fd3b91475472\") " pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.784993 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:38 crc kubenswrapper[4792]: I1127 17:26:38.868210 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerStarted","Data":"a77791176951d83d34569a14550395a1798bb85a925c275cea1ee55b4d413757"} Nov 27 17:26:39 crc kubenswrapper[4792]: I1127 17:26:39.006521 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cebbd73-ff6c-46b4-8b96-da44b744dc66-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-k5t27\" (UID: \"1cebbd73-ff6c-46b4-8b96-da44b744dc66\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:39 crc kubenswrapper[4792]: I1127 17:26:39.011202 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cebbd73-ff6c-46b4-8b96-da44b744dc66-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-k5t27\" (UID: \"1cebbd73-ff6c-46b4-8b96-da44b744dc66\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:39 crc kubenswrapper[4792]: I1127 17:26:39.209601 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:39 crc kubenswrapper[4792]: I1127 17:26:39.209733 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-metrics-certs\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:39 crc kubenswrapper[4792]: E1127 17:26:39.209969 4792 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 27 17:26:39 crc kubenswrapper[4792]: E1127 17:26:39.210097 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist podName:ee78f3b0-9199-41a2-ad7a-64e175706386 nodeName:}" failed. 
No retries permitted until 2025-11-27 17:26:40.210070668 +0000 UTC m=+1022.552896996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist") pod "speaker-rptqb" (UID: "ee78f3b0-9199-41a2-ad7a-64e175706386") : secret "metallb-memberlist" not found
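This is the second refusal for the speaker-rptqb memberlist volume: the first failure at 17:26:38 set durationBeforeRetry to 500ms, this one doubles it to 1s, and nestedpendingoperations tracks that backoff per volume, resetting it once a mount succeeds (as it does at 17:26:40 below, after the metallb-memberlist secret finally appears). A toy version of that doubling delay, with invented names rather than the kubelet's actual backoff type:

package main

import (
	"fmt"
	"time"
)

// backoff mimics the per-operation doubling seen above: 500ms, 1s, 2s, ...
// capped so a long outage doesn't push retries out indefinitely.
type backoff struct {
	initial, max, next time.Duration
}

func (b *backoff) nextDelay() time.Duration {
	if b.next == 0 {
		b.next = b.initial
	} else {
		b.next *= 2
		if b.next > b.max {
			b.next = b.max
		}
	}
	return b.next // "No retries permitted until now + nextDelay()"
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 4; i++ {
		fmt.Printf("failure %d: durationBeforeRetry %v\n", i+1, b.nextDelay())
	}
	// prints 500ms, 1s, 2s, 4s — the 500ms and 1s steps match the log above
}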
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" event={"ID":"1cebbd73-ff6c-46b4-8b96-da44b744dc66","Type":"ContainerStarted","Data":"aacacf836ba4e4970cbdcdc1447ebb76fdbafb14fb8552ea9112289a86175793"} Nov 27 17:26:39 crc kubenswrapper[4792]: I1127 17:26:39.905342 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-b2ds9" podStartSLOduration=1.905290022 podStartE2EDuration="1.905290022s" podCreationTimestamp="2025-11-27 17:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:26:39.90112666 +0000 UTC m=+1022.243952988" watchObservedRunningTime="2025-11-27 17:26:39.905290022 +0000 UTC m=+1022.248116350" Nov 27 17:26:40 crc kubenswrapper[4792]: I1127 17:26:40.228779 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:40 crc kubenswrapper[4792]: I1127 17:26:40.241330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ee78f3b0-9199-41a2-ad7a-64e175706386-memberlist\") pod \"speaker-rptqb\" (UID: \"ee78f3b0-9199-41a2-ad7a-64e175706386\") " pod="metallb-system/speaker-rptqb" Nov 27 17:26:40 crc kubenswrapper[4792]: I1127 17:26:40.498976 4792 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-kvjgz" Nov 27 17:26:40 crc kubenswrapper[4792]: I1127 17:26:40.505208 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rptqb" Nov 27 17:26:40 crc kubenswrapper[4792]: I1127 17:26:40.891295 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rptqb" event={"ID":"ee78f3b0-9199-41a2-ad7a-64e175706386","Type":"ContainerStarted","Data":"eb92901b128086b243d4c4974e4e8e22413778a250a1ce9edb98db9512e52cdb"} Nov 27 17:26:41 crc kubenswrapper[4792]: I1127 17:26:41.901093 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rptqb" event={"ID":"ee78f3b0-9199-41a2-ad7a-64e175706386","Type":"ContainerStarted","Data":"3762c40dc814f6c37dae6a893d1ec23a2c5f1c300f2aecf472c42b7bb1664420"} Nov 27 17:26:41 crc kubenswrapper[4792]: I1127 17:26:41.901146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rptqb" event={"ID":"ee78f3b0-9199-41a2-ad7a-64e175706386","Type":"ContainerStarted","Data":"34b570b0986c20fde19bdd4b6dfb56d9174f06530f624c56b809e55fbafa736d"} Nov 27 17:26:41 crc kubenswrapper[4792]: I1127 17:26:41.901268 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rptqb" Nov 27 17:26:41 crc kubenswrapper[4792]: I1127 17:26:41.923569 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rptqb" podStartSLOduration=3.923553158 podStartE2EDuration="3.923553158s" podCreationTimestamp="2025-11-27 17:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:26:41.921876716 +0000 UTC m=+1024.264703044" watchObservedRunningTime="2025-11-27 17:26:41.923553158 +0000 UTC m=+1024.266379466" Nov 27 17:26:47 crc kubenswrapper[4792]: I1127 17:26:47.953231 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" event={"ID":"1cebbd73-ff6c-46b4-8b96-da44b744dc66","Type":"ContainerStarted","Data":"526bf6ca174a658938c6640147ae6f5820c98909562ac5bc005e84e5103b3ee5"} Nov 27 17:26:47 crc kubenswrapper[4792]: I1127 17:26:47.953739 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" Nov 27 17:26:47 crc kubenswrapper[4792]: I1127 17:26:47.957267 4792 generic.go:334] "Generic (PLEG): container finished" podID="a4f24305-d786-4537-b13b-86e83451bef4" containerID="206f9e331008ddd407bee12489680ecbe589abdb3e095c017cee46a2062719e9" exitCode=0 Nov 27 17:26:47 crc kubenswrapper[4792]: I1127 17:26:47.957347 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerDied","Data":"206f9e331008ddd407bee12489680ecbe589abdb3e095c017cee46a2062719e9"} Nov 27 17:26:47 crc kubenswrapper[4792]: I1127 17:26:47.977094 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27" podStartSLOduration=2.569410992 podStartE2EDuration="9.977075045s" podCreationTimestamp="2025-11-27 17:26:38 +0000 UTC" firstStartedPulling="2025-11-27 17:26:39.696162473 +0000 UTC m=+1022.038988791" lastFinishedPulling="2025-11-27 17:26:47.103826526 +0000 UTC m=+1029.446652844" observedRunningTime="2025-11-27 17:26:47.974976483 +0000 UTC m=+1030.317802811" watchObservedRunningTime="2025-11-27 17:26:47.977075045 +0000 UTC m=+1030.319901363" Nov 27 17:26:48 crc kubenswrapper[4792]: E1127 17:26:48.204635 4792 cadvisor_stats_provider.go:516] "Partial failure issuing 
Nov 27 17:26:48 crc kubenswrapper[4792]: E1127 17:26:48.204635 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4f24305_d786_4537_b13b_86e83451bef4.slice/crio-eba5022b057ec5fc4818b0390d2503e27d087785404ebbfbf47c6bf9c9408b0f.scope\": RecentStats: unable to find data in memory cache]" Nov 27 17:26:48 crc kubenswrapper[4792]: E1127 17:26:48.204658 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4f24305_d786_4537_b13b_86e83451bef4.slice/crio-eba5022b057ec5fc4818b0390d2503e27d087785404ebbfbf47c6bf9c9408b0f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4f24305_d786_4537_b13b_86e83451bef4.slice/crio-conmon-eba5022b057ec5fc4818b0390d2503e27d087785404ebbfbf47c6bf9c9408b0f.scope\": RecentStats: unable to find data in memory cache]" Nov 27 17:26:48 crc kubenswrapper[4792]: I1127 17:26:48.964228 4792 generic.go:334] "Generic (PLEG): container finished" podID="a4f24305-d786-4537-b13b-86e83451bef4" containerID="eba5022b057ec5fc4818b0390d2503e27d087785404ebbfbf47c6bf9c9408b0f" exitCode=0 Nov 27 17:26:48 crc kubenswrapper[4792]: I1127 17:26:48.964386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerDied","Data":"eba5022b057ec5fc4818b0390d2503e27d087785404ebbfbf47c6bf9c9408b0f"} Nov 27 17:26:49 crc kubenswrapper[4792]: I1127 17:26:49.978072 4792 generic.go:334] "Generic (PLEG): container finished" podID="a4f24305-d786-4537-b13b-86e83451bef4" containerID="afe25eaeefffdbe337db5abf3812c3d9ed2b2f81f5c8dff506314cd5512c85f4" exitCode=0 Nov 27 17:26:49 crc kubenswrapper[4792]: I1127 17:26:49.978160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerDied","Data":"afe25eaeefffdbe337db5abf3812c3d9ed2b2f81f5c8dff506314cd5512c85f4"} Nov 27 17:26:50 crc kubenswrapper[4792]: I1127 17:26:50.508998 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rptqb" Nov 27 17:26:50 crc kubenswrapper[4792]: I1127 17:26:50.992402 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerStarted","Data":"59abb8d6082ffad97b5a96db0a5c6442c6a1591efcc81892c10768c67b8aae2d"} Nov 27 17:26:50 crc kubenswrapper[4792]: I1127 17:26:50.992550 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerStarted","Data":"2450666a7c0f5b5c0e9ec41bc85422f90aacfcee8d4f660f80f81267bcb64ac6"} Nov 27 17:26:50 crc kubenswrapper[4792]: I1127 17:26:50.992697 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerStarted","Data":"1930b9a2f1907dda2200c2da990055cda0d720544999c608d63ebfcff306733a"} Nov 27 17:26:50 crc kubenswrapper[4792]: I1127 17:26:50.992727 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerStarted","Data":"affe15f810e79067d5794071924f31e07b31fc7d749c5705c61191c51bf675f8"} Nov 27 17:26:52 crc kubenswrapper[4792]: I1127 17:26:52.007978 4792 kubelet.go:2453] "SyncLoop
(PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerStarted","Data":"37c019ef285366d24f3f54283fd469825d076cd412da4617d82c2078079777ae"} Nov 27 17:26:52 crc kubenswrapper[4792]: I1127 17:26:52.008325 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:52 crc kubenswrapper[4792]: I1127 17:26:52.008349 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhchz" event={"ID":"a4f24305-d786-4537-b13b-86e83451bef4","Type":"ContainerStarted","Data":"0d758debf517ff6bdb19734205e75ee0f43745fdc279864dc3cb83ac8b5bb297"} Nov 27 17:26:52 crc kubenswrapper[4792]: I1127 17:26:52.034832 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rhchz" podStartSLOduration=5.72188534 podStartE2EDuration="14.034780677s" podCreationTimestamp="2025-11-27 17:26:38 +0000 UTC" firstStartedPulling="2025-11-27 17:26:38.784356797 +0000 UTC m=+1021.127183115" lastFinishedPulling="2025-11-27 17:26:47.097252134 +0000 UTC m=+1029.440078452" observedRunningTime="2025-11-27 17:26:52.030151513 +0000 UTC m=+1034.372977901" watchObservedRunningTime="2025-11-27 17:26:52.034780677 +0000 UTC m=+1034.377606995" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.373787 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9dw9f"] Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.374968 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9dw9f" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.377566 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5jbzj" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.377765 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.378149 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.387547 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9dw9f"] Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.509041 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6d2g\" (UniqueName: \"kubernetes.io/projected/2178e2fa-85e7-4960-8652-aa088f655a41-kube-api-access-b6d2g\") pod \"openstack-operator-index-9dw9f\" (UID: \"2178e2fa-85e7-4960-8652-aa088f655a41\") " pod="openstack-operators/openstack-operator-index-9dw9f" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.610289 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6d2g\" (UniqueName: \"kubernetes.io/projected/2178e2fa-85e7-4960-8652-aa088f655a41-kube-api-access-b6d2g\") pod \"openstack-operator-index-9dw9f\" (UID: \"2178e2fa-85e7-4960-8652-aa088f655a41\") " pod="openstack-operators/openstack-operator-index-9dw9f" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.612487 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.632100 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-b6d2g\" (UniqueName: \"kubernetes.io/projected/2178e2fa-85e7-4960-8652-aa088f655a41-kube-api-access-b6d2g\") pod \"openstack-operator-index-9dw9f\" (UID: \"2178e2fa-85e7-4960-8652-aa088f655a41\") " pod="openstack-operators/openstack-operator-index-9dw9f" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.655223 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rhchz" Nov 27 17:26:53 crc kubenswrapper[4792]: I1127 17:26:53.700804 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9dw9f" Nov 27 17:26:54 crc kubenswrapper[4792]: I1127 17:26:54.200392 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9dw9f"] Nov 27 17:26:55 crc kubenswrapper[4792]: I1127 17:26:55.039891 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9dw9f" event={"ID":"2178e2fa-85e7-4960-8652-aa088f655a41","Type":"ContainerStarted","Data":"ac38ee53e1097a5c1e883fefa25dfffb6de7aea7d140f80bc7e26d1765835a08"} Nov 27 17:26:56 crc kubenswrapper[4792]: I1127 17:26:56.775118 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9dw9f"] Nov 27 17:26:57 crc kubenswrapper[4792]: I1127 17:26:57.364664 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n6smr"] Nov 27 17:26:57 crc kubenswrapper[4792]: I1127 17:26:57.366208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n6smr" Nov 27 17:26:57 crc kubenswrapper[4792]: I1127 17:26:57.380803 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n6smr"] Nov 27 17:26:57 crc kubenswrapper[4792]: I1127 17:26:57.487823 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gttwx\" (UniqueName: \"kubernetes.io/projected/ad9fe5d7-1539-4597-b2b7-5fc5cf555264-kube-api-access-gttwx\") pod \"openstack-operator-index-n6smr\" (UID: \"ad9fe5d7-1539-4597-b2b7-5fc5cf555264\") " pod="openstack-operators/openstack-operator-index-n6smr" Nov 27 17:26:57 crc kubenswrapper[4792]: I1127 17:26:57.589573 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gttwx\" (UniqueName: \"kubernetes.io/projected/ad9fe5d7-1539-4597-b2b7-5fc5cf555264-kube-api-access-gttwx\") pod \"openstack-operator-index-n6smr\" (UID: \"ad9fe5d7-1539-4597-b2b7-5fc5cf555264\") " pod="openstack-operators/openstack-operator-index-n6smr" Nov 27 17:26:57 crc kubenswrapper[4792]: I1127 17:26:57.610915 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gttwx\" (UniqueName: \"kubernetes.io/projected/ad9fe5d7-1539-4597-b2b7-5fc5cf555264-kube-api-access-gttwx\") pod \"openstack-operator-index-n6smr\" (UID: \"ad9fe5d7-1539-4597-b2b7-5fc5cf555264\") " pod="openstack-operators/openstack-operator-index-n6smr" Nov 27 17:26:57 crc kubenswrapper[4792]: I1127 17:26:57.692926 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n6smr" Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.064109 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9dw9f" event={"ID":"2178e2fa-85e7-4960-8652-aa088f655a41","Type":"ContainerStarted","Data":"1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181"} Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.064196 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9dw9f" podUID="2178e2fa-85e7-4960-8652-aa088f655a41" containerName="registry-server" containerID="cri-o://1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181" gracePeriod=2 Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.083089 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9dw9f" podStartSLOduration=2.118812339 podStartE2EDuration="5.083067318s" podCreationTimestamp="2025-11-27 17:26:53 +0000 UTC" firstStartedPulling="2025-11-27 17:26:54.195677678 +0000 UTC m=+1036.538503996" lastFinishedPulling="2025-11-27 17:26:57.159932617 +0000 UTC m=+1039.502758975" observedRunningTime="2025-11-27 17:26:58.078748801 +0000 UTC m=+1040.421575119" watchObservedRunningTime="2025-11-27 17:26:58.083067318 +0000 UTC m=+1040.425893676" Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.152329 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n6smr"] Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.587315 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9dw9f" Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.711140 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6d2g\" (UniqueName: \"kubernetes.io/projected/2178e2fa-85e7-4960-8652-aa088f655a41-kube-api-access-b6d2g\") pod \"2178e2fa-85e7-4960-8652-aa088f655a41\" (UID: \"2178e2fa-85e7-4960-8652-aa088f655a41\") " Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.717135 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2178e2fa-85e7-4960-8652-aa088f655a41-kube-api-access-b6d2g" (OuterVolumeSpecName: "kube-api-access-b6d2g") pod "2178e2fa-85e7-4960-8652-aa088f655a41" (UID: "2178e2fa-85e7-4960-8652-aa088f655a41"). InnerVolumeSpecName "kube-api-access-b6d2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.790561 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-b2ds9" Nov 27 17:26:58 crc kubenswrapper[4792]: I1127 17:26:58.814326 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6d2g\" (UniqueName: \"kubernetes.io/projected/2178e2fa-85e7-4960-8652-aa088f655a41-kube-api-access-b6d2g\") on node \"crc\" DevicePath \"\"" Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.082124 4792 generic.go:334] "Generic (PLEG): container finished" podID="2178e2fa-85e7-4960-8652-aa088f655a41" containerID="1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181" exitCode=0 Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.082196 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9dw9f"
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.082225 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9dw9f" event={"ID":"2178e2fa-85e7-4960-8652-aa088f655a41","Type":"ContainerDied","Data":"1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181"}
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.084277 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9dw9f" event={"ID":"2178e2fa-85e7-4960-8652-aa088f655a41","Type":"ContainerDied","Data":"ac38ee53e1097a5c1e883fefa25dfffb6de7aea7d140f80bc7e26d1765835a08"}
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.084313 4792 scope.go:117] "RemoveContainer" containerID="1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181"
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.091809 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n6smr" event={"ID":"ad9fe5d7-1539-4597-b2b7-5fc5cf555264","Type":"ContainerStarted","Data":"3258c6abbc58bf946a9e4a99f1c90a24e2f10424d9f52dfe35f1bbf24467f825"}
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.091885 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n6smr" event={"ID":"ad9fe5d7-1539-4597-b2b7-5fc5cf555264","Type":"ContainerStarted","Data":"b9bf2070e87f73adaac76ccaec637053ade62a3035470754129acac47d0682fa"}
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.125839 4792 scope.go:117] "RemoveContainer" containerID="1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181"
Nov 27 17:26:59 crc kubenswrapper[4792]: E1127 17:26:59.126541 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181\": container with ID starting with 1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181 not found: ID does not exist" containerID="1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181"
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.126810 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181"} err="failed to get container status \"1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181\": rpc error: code = NotFound desc = could not find container \"1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181\": container with ID starting with 1b974b4976784ce085f25b9ecea86850671b56856fd24ae288f291a724490181 not found: ID does not exist"
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.128880 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n6smr" podStartSLOduration=2.077581445 podStartE2EDuration="2.128863154s" podCreationTimestamp="2025-11-27 17:26:57 +0000 UTC" firstStartedPulling="2025-11-27 17:26:58.153325427 +0000 UTC m=+1040.496151745" lastFinishedPulling="2025-11-27 17:26:58.204607136 +0000 UTC m=+1040.547433454" observedRunningTime="2025-11-27 17:26:59.117171125 +0000 UTC m=+1041.459997503" watchObservedRunningTime="2025-11-27 17:26:59.128863154 +0000 UTC m=+1041.471689482"
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.148521 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9dw9f"]
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.155666 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9dw9f"]
Nov 27 17:26:59 crc kubenswrapper[4792]: I1127 17:26:59.230505 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-k5t27"
Nov 27 17:27:00 crc kubenswrapper[4792]: I1127 17:27:00.704356 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2178e2fa-85e7-4960-8652-aa088f655a41" path="/var/lib/kubelet/pods/2178e2fa-85e7-4960-8652-aa088f655a41/volumes"
Nov 27 17:27:07 crc kubenswrapper[4792]: I1127 17:27:07.693912 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-n6smr"
Nov 27 17:27:07 crc kubenswrapper[4792]: I1127 17:27:07.694761 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-n6smr"
Nov 27 17:27:07 crc kubenswrapper[4792]: I1127 17:27:07.744074 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-n6smr"
Nov 27 17:27:08 crc kubenswrapper[4792]: I1127 17:27:08.221011 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-n6smr"
Nov 27 17:27:08 crc kubenswrapper[4792]: I1127 17:27:08.630735 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rhchz"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.013751 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"]
Nov 27 17:27:09 crc kubenswrapper[4792]: E1127 17:27:09.014359 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2178e2fa-85e7-4960-8652-aa088f655a41" containerName="registry-server"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.014371 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2178e2fa-85e7-4960-8652-aa088f655a41" containerName="registry-server"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.014523 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2178e2fa-85e7-4960-8652-aa088f655a41" containerName="registry-server"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.015521 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.020457 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"]
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.023386 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tg55d"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.143280 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h6qx\" (UniqueName: \"kubernetes.io/projected/818171bc-2f19-4297-92b9-a01e361b6387-kube-api-access-5h6qx\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.144079 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-bundle\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.144229 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-util\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.245217 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h6qx\" (UniqueName: \"kubernetes.io/projected/818171bc-2f19-4297-92b9-a01e361b6387-kube-api-access-5h6qx\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.245542 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-bundle\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.245677 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-util\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.246173 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-bundle\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.246221 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-util\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.276778 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h6qx\" (UniqueName: \"kubernetes.io/projected/818171bc-2f19-4297-92b9-a01e361b6387-kube-api-access-5h6qx\") pod \"6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") " pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.353261 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:09 crc kubenswrapper[4792]: I1127 17:27:09.824008 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"]
Nov 27 17:27:09 crc kubenswrapper[4792]: W1127 17:27:09.828885 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod818171bc_2f19_4297_92b9_a01e361b6387.slice/crio-9e0200ee90d1855001e4219bfafae76155b83a6c0a61891b567880af15c9308b WatchSource:0}: Error finding container 9e0200ee90d1855001e4219bfafae76155b83a6c0a61891b567880af15c9308b: Status 404 returned error can't find the container with id 9e0200ee90d1855001e4219bfafae76155b83a6c0a61891b567880af15c9308b
Nov 27 17:27:10 crc kubenswrapper[4792]: I1127 17:27:10.198989 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k" event={"ID":"818171bc-2f19-4297-92b9-a01e361b6387","Type":"ContainerStarted","Data":"9e0200ee90d1855001e4219bfafae76155b83a6c0a61891b567880af15c9308b"}
Nov 27 17:27:11 crc kubenswrapper[4792]: I1127 17:27:11.211492 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k" event={"ID":"818171bc-2f19-4297-92b9-a01e361b6387","Type":"ContainerStarted","Data":"c455f0c273076e0ae452a28ee75f0ef46b6d52aa605c29e447c8a0cba3fab8db"}
Nov 27 17:27:12 crc kubenswrapper[4792]: I1127 17:27:12.226602 4792 generic.go:334] "Generic (PLEG): container finished" podID="818171bc-2f19-4297-92b9-a01e361b6387" containerID="c455f0c273076e0ae452a28ee75f0ef46b6d52aa605c29e447c8a0cba3fab8db" exitCode=0
Nov 27 17:27:12 crc kubenswrapper[4792]: I1127 17:27:12.226742 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k" event={"ID":"818171bc-2f19-4297-92b9-a01e361b6387","Type":"ContainerDied","Data":"c455f0c273076e0ae452a28ee75f0ef46b6d52aa605c29e447c8a0cba3fab8db"}
Nov 27 17:27:13 crc kubenswrapper[4792]: I1127 17:27:13.241110 4792 generic.go:334] "Generic (PLEG): container finished" podID="818171bc-2f19-4297-92b9-a01e361b6387" containerID="749f4195f2cc5aa069186c19918032b161e277783b3aaec4e55485b65b4f1271" exitCode=0
Nov 27 17:27:13 crc kubenswrapper[4792]: I1127 17:27:13.241165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k" event={"ID":"818171bc-2f19-4297-92b9-a01e361b6387","Type":"ContainerDied","Data":"749f4195f2cc5aa069186c19918032b161e277783b3aaec4e55485b65b4f1271"}
Nov 27 17:27:14 crc kubenswrapper[4792]: I1127 17:27:14.255375 4792 generic.go:334] "Generic (PLEG): container finished" podID="818171bc-2f19-4297-92b9-a01e361b6387" containerID="6e043add29dc758859fda2f15c03cd23c085631f9de710c9a918271702df4c98" exitCode=0
Nov 27 17:27:14 crc kubenswrapper[4792]: I1127 17:27:14.255422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k" event={"ID":"818171bc-2f19-4297-92b9-a01e361b6387","Type":"ContainerDied","Data":"6e043add29dc758859fda2f15c03cd23c085631f9de710c9a918271702df4c98"}
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.666088 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.783050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-bundle\") pod \"818171bc-2f19-4297-92b9-a01e361b6387\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") "
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.783118 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-util\") pod \"818171bc-2f19-4297-92b9-a01e361b6387\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") "
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.783232 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h6qx\" (UniqueName: \"kubernetes.io/projected/818171bc-2f19-4297-92b9-a01e361b6387-kube-api-access-5h6qx\") pod \"818171bc-2f19-4297-92b9-a01e361b6387\" (UID: \"818171bc-2f19-4297-92b9-a01e361b6387\") "
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.784505 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-bundle" (OuterVolumeSpecName: "bundle") pod "818171bc-2f19-4297-92b9-a01e361b6387" (UID: "818171bc-2f19-4297-92b9-a01e361b6387"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.789827 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818171bc-2f19-4297-92b9-a01e361b6387-kube-api-access-5h6qx" (OuterVolumeSpecName: "kube-api-access-5h6qx") pod "818171bc-2f19-4297-92b9-a01e361b6387" (UID: "818171bc-2f19-4297-92b9-a01e361b6387"). InnerVolumeSpecName "kube-api-access-5h6qx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.814930 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-util" (OuterVolumeSpecName: "util") pod "818171bc-2f19-4297-92b9-a01e361b6387" (UID: "818171bc-2f19-4297-92b9-a01e361b6387"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.885811 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h6qx\" (UniqueName: \"kubernetes.io/projected/818171bc-2f19-4297-92b9-a01e361b6387-kube-api-access-5h6qx\") on node \"crc\" DevicePath \"\""
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.886255 4792 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:27:15 crc kubenswrapper[4792]: I1127 17:27:15.886401 4792 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/818171bc-2f19-4297-92b9-a01e361b6387-util\") on node \"crc\" DevicePath \"\""
Nov 27 17:27:16 crc kubenswrapper[4792]: I1127 17:27:16.279206 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k" event={"ID":"818171bc-2f19-4297-92b9-a01e361b6387","Type":"ContainerDied","Data":"9e0200ee90d1855001e4219bfafae76155b83a6c0a61891b567880af15c9308b"}
Nov 27 17:27:16 crc kubenswrapper[4792]: I1127 17:27:16.279256 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e0200ee90d1855001e4219bfafae76155b83a6c0a61891b567880af15c9308b"
Nov 27 17:27:16 crc kubenswrapper[4792]: I1127 17:27:16.279363 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.391850 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"]
Nov 27 17:27:21 crc kubenswrapper[4792]: E1127 17:27:21.392875 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818171bc-2f19-4297-92b9-a01e361b6387" containerName="util"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.392895 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="818171bc-2f19-4297-92b9-a01e361b6387" containerName="util"
Nov 27 17:27:21 crc kubenswrapper[4792]: E1127 17:27:21.392913 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818171bc-2f19-4297-92b9-a01e361b6387" containerName="pull"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.392922 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="818171bc-2f19-4297-92b9-a01e361b6387" containerName="pull"
Nov 27 17:27:21 crc kubenswrapper[4792]: E1127 17:27:21.392944 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818171bc-2f19-4297-92b9-a01e361b6387" containerName="extract"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.392953 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="818171bc-2f19-4297-92b9-a01e361b6387" containerName="extract"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.393138 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="818171bc-2f19-4297-92b9-a01e361b6387" containerName="extract"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.393825 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.395824 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-54qjl"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.413885 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"]
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.481922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkltv\" (UniqueName: \"kubernetes.io/projected/d8282ec7-1375-403d-b679-d7e372e07f6f-kube-api-access-qkltv\") pod \"openstack-operator-controller-operator-b44dff85c-lpx9d\" (UID: \"d8282ec7-1375-403d-b679-d7e372e07f6f\") " pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.583881 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkltv\" (UniqueName: \"kubernetes.io/projected/d8282ec7-1375-403d-b679-d7e372e07f6f-kube-api-access-qkltv\") pod \"openstack-operator-controller-operator-b44dff85c-lpx9d\" (UID: \"d8282ec7-1375-403d-b679-d7e372e07f6f\") " pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.621584 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkltv\" (UniqueName: \"kubernetes.io/projected/d8282ec7-1375-403d-b679-d7e372e07f6f-kube-api-access-qkltv\") pod \"openstack-operator-controller-operator-b44dff85c-lpx9d\" (UID: \"d8282ec7-1375-403d-b679-d7e372e07f6f\") " pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"
Nov 27 17:27:21 crc kubenswrapper[4792]: I1127 17:27:21.718012 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"
Nov 27 17:27:22 crc kubenswrapper[4792]: I1127 17:27:22.176579 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"]
Nov 27 17:27:22 crc kubenswrapper[4792]: I1127 17:27:22.361699 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d" event={"ID":"d8282ec7-1375-403d-b679-d7e372e07f6f","Type":"ContainerStarted","Data":"c49d6d86b4e65f7b3fec168f2d4af9875bd377a74b5ba055ed557cefc1460df8"}
Nov 27 17:27:26 crc kubenswrapper[4792]: I1127 17:27:26.426042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d" event={"ID":"d8282ec7-1375-403d-b679-d7e372e07f6f","Type":"ContainerStarted","Data":"7cd42203fbc635398aa1663174fb47d0c077b7fee469fba2ad7984a2fb865113"}
Nov 27 17:27:26 crc kubenswrapper[4792]: I1127 17:27:26.426722 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"
Nov 27 17:27:26 crc kubenswrapper[4792]: I1127 17:27:26.462616 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d" podStartSLOduration=1.876260572 podStartE2EDuration="5.462597997s" podCreationTimestamp="2025-11-27 17:27:21 +0000 UTC" firstStartedPulling="2025-11-27 17:27:22.187603253 +0000 UTC m=+1064.530429571" lastFinishedPulling="2025-11-27 17:27:25.773940678 +0000 UTC m=+1068.116766996" observedRunningTime="2025-11-27 17:27:26.459692605 +0000 UTC m=+1068.802518933" watchObservedRunningTime="2025-11-27 17:27:26.462597997 +0000 UTC m=+1068.805424315"
Nov 27 17:27:31 crc kubenswrapper[4792]: I1127 17:27:31.721463 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-b44dff85c-lpx9d"
Nov 27 17:27:38 crc kubenswrapper[4792]: I1127 17:27:38.290208 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:27:38 crc kubenswrapper[4792]: I1127 17:27:38.290800 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.482055 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.483731 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.489057 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.489239 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lj6xt"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.490394 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.492586 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-5hsjv"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.496508 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.506552 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.541494 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-xwttv"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.542839 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.544329 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ks2gb"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.573073 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.574348 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.578091 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kt6t2"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.587534 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-xwttv"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.621274 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.640881 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.643480 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.648374 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.652206 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mwdfn"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.657129 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.663152 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-sb2dg"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.674584 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf6j2\" (UniqueName: \"kubernetes.io/projected/db57e7fa-0523-4a09-91a0-371fe08e5052-kube-api-access-vf6j2\") pod \"cinder-operator-controller-manager-6b7f75547b-z7fm9\" (UID: \"db57e7fa-0523-4a09-91a0-371fe08e5052\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.675042 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcn6z\" (UniqueName: \"kubernetes.io/projected/ad88e4ad-7c33-4dac-85ed-54e7f69d8625-kube-api-access-dcn6z\") pod \"glance-operator-controller-manager-589cbd6b5b-hkks9\" (UID: \"ad88e4ad-7c33-4dac-85ed-54e7f69d8625\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.675510 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f95fz\" (UniqueName: \"kubernetes.io/projected/880e84df-6b95-4c8d-8b4c-146f26d99098-kube-api-access-f95fz\") pod \"barbican-operator-controller-manager-7b64f4fb85-8njms\" (UID: \"880e84df-6b95-4c8d-8b4c-146f26d99098\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.675664 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk77c\" (UniqueName: \"kubernetes.io/projected/f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79-kube-api-access-zk77c\") pod \"designate-operator-controller-manager-955677c94-xwttv\" (UID: \"f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.694904 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.712138 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.728853 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.758747 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.761394 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.762282 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p4qhv"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.783580 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf6j2\" (UniqueName: \"kubernetes.io/projected/db57e7fa-0523-4a09-91a0-371fe08e5052-kube-api-access-vf6j2\") pod \"cinder-operator-controller-manager-6b7f75547b-z7fm9\" (UID: \"db57e7fa-0523-4a09-91a0-371fe08e5052\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.783637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcn6z\" (UniqueName: \"kubernetes.io/projected/ad88e4ad-7c33-4dac-85ed-54e7f69d8625-kube-api-access-dcn6z\") pod \"glance-operator-controller-manager-589cbd6b5b-hkks9\" (UID: \"ad88e4ad-7c33-4dac-85ed-54e7f69d8625\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.783799 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfs5h\" (UniqueName: \"kubernetes.io/projected/d29cd75e-9782-4f90-b9cf-95329e101cbb-kube-api-access-hfs5h\") pod \"horizon-operator-controller-manager-5d494799bf-qs7wq\" (UID: \"d29cd75e-9782-4f90-b9cf-95329e101cbb\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.783867 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthmm\" (UniqueName: \"kubernetes.io/projected/04aba733-246c-4169-b91d-c7708aea6a71-kube-api-access-sthmm\") pod \"heat-operator-controller-manager-5b77f656f-gqmrh\" (UID: \"04aba733-246c-4169-b91d-c7708aea6a71\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.783903 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f95fz\" (UniqueName: \"kubernetes.io/projected/880e84df-6b95-4c8d-8b4c-146f26d99098-kube-api-access-f95fz\") pod \"barbican-operator-controller-manager-7b64f4fb85-8njms\" (UID: \"880e84df-6b95-4c8d-8b4c-146f26d99098\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.783931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk77c\" (UniqueName: \"kubernetes.io/projected/f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79-kube-api-access-zk77c\") pod \"designate-operator-controller-manager-955677c94-xwttv\" (UID: \"f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.808079 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.810251 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.812324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk77c\" (UniqueName: \"kubernetes.io/projected/f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79-kube-api-access-zk77c\") pod \"designate-operator-controller-manager-955677c94-xwttv\" (UID: \"f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.812985 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wpk7t"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.813539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcn6z\" (UniqueName: \"kubernetes.io/projected/ad88e4ad-7c33-4dac-85ed-54e7f69d8625-kube-api-access-dcn6z\") pod \"glance-operator-controller-manager-589cbd6b5b-hkks9\" (UID: \"ad88e4ad-7c33-4dac-85ed-54e7f69d8625\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.815077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf6j2\" (UniqueName: \"kubernetes.io/projected/db57e7fa-0523-4a09-91a0-371fe08e5052-kube-api-access-vf6j2\") pod \"cinder-operator-controller-manager-6b7f75547b-z7fm9\" (UID: \"db57e7fa-0523-4a09-91a0-371fe08e5052\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.817083 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.822795 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.841343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f95fz\" (UniqueName: \"kubernetes.io/projected/880e84df-6b95-4c8d-8b4c-146f26d99098-kube-api-access-f95fz\") pod \"barbican-operator-controller-manager-7b64f4fb85-8njms\" (UID: \"880e84df-6b95-4c8d-8b4c-146f26d99098\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.845701 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.863321 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.866852 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.870769 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-thrh7"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.879431 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.882709 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.884936 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.885916 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfs5h\" (UniqueName: \"kubernetes.io/projected/d29cd75e-9782-4f90-b9cf-95329e101cbb-kube-api-access-hfs5h\") pod \"horizon-operator-controller-manager-5d494799bf-qs7wq\" (UID: \"d29cd75e-9782-4f90-b9cf-95329e101cbb\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.885971 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-448wp\" (UniqueName: \"kubernetes.io/projected/94d0c824-194b-4d52-ba80-1cc08301a196-kube-api-access-448wp\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.886038 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthmm\" (UniqueName: \"kubernetes.io/projected/04aba733-246c-4169-b91d-c7708aea6a71-kube-api-access-sthmm\") pod \"heat-operator-controller-manager-5b77f656f-gqmrh\" (UID: \"04aba733-246c-4169-b91d-c7708aea6a71\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.886077 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.891566 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rrmd9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.897525 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.906069 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.907933 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.910227 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.912950 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xrvkh"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.913540 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthmm\" (UniqueName: \"kubernetes.io/projected/04aba733-246c-4169-b91d-c7708aea6a71-kube-api-access-sthmm\") pod \"heat-operator-controller-manager-5b77f656f-gqmrh\" (UID: \"04aba733-246c-4169-b91d-c7708aea6a71\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.917195 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.917245 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfs5h\" (UniqueName: \"kubernetes.io/projected/d29cd75e-9782-4f90-b9cf-95329e101cbb-kube-api-access-hfs5h\") pod \"horizon-operator-controller-manager-5d494799bf-qs7wq\" (UID: \"d29cd75e-9782-4f90-b9cf-95329e101cbb\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.918345 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.920071 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-w9986"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.934196 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.955813 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.978965 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.983306 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.990242 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.990413 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxbg\" (UniqueName: \"kubernetes.io/projected/5e49917e-d729-4661-a604-a603f9a8cca7-kube-api-access-5qxbg\") pod \"ironic-operator-controller-manager-67cb4dc6d4-7gmv4\" (UID: \"5e49917e-d729-4661-a604-a603f9a8cca7\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4"
Nov 27 17:27:49 crc kubenswrapper[4792]: E1127 17:27:49.990443 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 27 17:27:49 crc kubenswrapper[4792]: E1127 17:27:49.990510 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert podName:94d0c824-194b-4d52-ba80-1cc08301a196 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:50.490492564 +0000 UTC m=+1092.833318882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert") pod "infra-operator-controller-manager-57548d458d-zklgd" (UID: "94d0c824-194b-4d52-ba80-1cc08301a196") : secret "infra-operator-webhook-server-cert" not found
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.990532 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9brx4\" (UniqueName: \"kubernetes.io/projected/fd4b3618-80a1-4d23-8faa-57c206b08cf6-kube-api-access-9brx4\") pod \"keystone-operator-controller-manager-7b4567c7cf-v7kfm\" (UID: \"fd4b3618-80a1-4d23-8faa-57c206b08cf6\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.990571 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-448wp\" (UniqueName: \"kubernetes.io/projected/94d0c824-194b-4d52-ba80-1cc08301a196-kube-api-access-448wp\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.990684 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxfm4\" (UniqueName: \"kubernetes.io/projected/652cb29e-91a9-433f-9002-c850a78cb8a4-kube-api-access-pxfm4\") pod \"manila-operator-controller-manager-5d499bf58b-dnsbx\" (UID: \"652cb29e-91a9-433f-9002-c850a78cb8a4\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx"
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.994121 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp"]
Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.997434 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp"
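The secret.go and nestedpendingoperations errors just above are the expected behaviour when a pod references a webhook certificate secret that has not been created yet: MountVolume.SetUp fails and is re-queued with a delay instead of being retried immediately, and the logged retry time matches the stated delay (17:27:49.990510 + 500ms ≈ 17:27:50.490492564). A small sketch of that backoff arithmetic; the log only shows the initial 500ms, so the doubling and the cap here are illustrative assumptions:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Failure time from the nestedpendingoperations entry above; the first
    	// retry is permitted roughly 500ms later (logged: 17:27:50.490492564).
    	next := time.Date(2025, 11, 27, 17, 27, 49, 990510000, time.UTC)

    	// Assumed exponential backoff: only the initial 500ms appears in the
    	// log; doubling and the 2-minute cap are assumptions for illustration.
    	delay, maxDelay := 500*time.Millisecond, 2*time.Minute
    	for i := 1; i <= 5; i++ {
    		next = next.Add(delay)
    		fmt.Printf("retry %d not before %s (delay %s)\n", i, next.Format("15:04:05.000000"), delay)
    		if delay *= 2; delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

Once the operator creating the secret catches up, the next scheduled retry succeeds and the pod proceeds to start, which is why these errors are transient during operator roll-outs.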
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" Nov 27 17:27:49 crc kubenswrapper[4792]: I1127 17:27:49.999043 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.003216 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7p25p" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.011962 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.023204 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.025172 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.028476 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-2zjbs" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.031704 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.038368 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-448wp\" (UniqueName: \"kubernetes.io/projected/94d0c824-194b-4d52-ba80-1cc08301a196-kube-api-access-448wp\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.045377 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.048017 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.052097 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-wnv46" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.058783 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.060457 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.063268 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-tmf25" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.065179 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.078231 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.094036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9brx4\" (UniqueName: \"kubernetes.io/projected/fd4b3618-80a1-4d23-8faa-57c206b08cf6-kube-api-access-9brx4\") pod \"keystone-operator-controller-manager-7b4567c7cf-v7kfm\" (UID: \"fd4b3618-80a1-4d23-8faa-57c206b08cf6\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.094182 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8qj\" (UniqueName: \"kubernetes.io/projected/afd4a5dc-d971-4eeb-8272-0ead1e9b4274-kube-api-access-mc8qj\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-lp4kf\" (UID: \"afd4a5dc-d971-4eeb-8272-0ead1e9b4274\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.094319 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhskr\" (UniqueName: \"kubernetes.io/projected/20dd117f-6517-4b59-855d-a0f9d08409a2-kube-api-access-qhskr\") pod \"neutron-operator-controller-manager-6fdcddb789-bmtbl\" (UID: \"20dd117f-6517-4b59-855d-a0f9d08409a2\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.094378 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxfm4\" (UniqueName: \"kubernetes.io/projected/652cb29e-91a9-433f-9002-c850a78cb8a4-kube-api-access-pxfm4\") pod \"manila-operator-controller-manager-5d499bf58b-dnsbx\" (UID: \"652cb29e-91a9-433f-9002-c850a78cb8a4\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.094590 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.094734 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfjz\" (UniqueName: \"kubernetes.io/projected/2fc2a1fd-6c7e-4d26-801b-5cac891fba51-kube-api-access-2kfjz\") pod \"nova-operator-controller-manager-79556f57fc-2lfqp\" (UID: \"2fc2a1fd-6c7e-4d26-801b-5cac891fba51\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.094795 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qxbg\" (UniqueName: \"kubernetes.io/projected/5e49917e-d729-4661-a604-a603f9a8cca7-kube-api-access-5qxbg\") pod \"ironic-operator-controller-manager-67cb4dc6d4-7gmv4\" (UID: \"5e49917e-d729-4661-a604-a603f9a8cca7\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.098908 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.099052 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7bp6j" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.104911 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.108526 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.121343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxfm4\" (UniqueName: \"kubernetes.io/projected/652cb29e-91a9-433f-9002-c850a78cb8a4-kube-api-access-pxfm4\") pod \"manila-operator-controller-manager-5d499bf58b-dnsbx\" (UID: \"652cb29e-91a9-433f-9002-c850a78cb8a4\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.131189 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qxbg\" (UniqueName: \"kubernetes.io/projected/5e49917e-d729-4661-a604-a603f9a8cca7-kube-api-access-5qxbg\") pod \"ironic-operator-controller-manager-67cb4dc6d4-7gmv4\" (UID: \"5e49917e-d729-4661-a604-a603f9a8cca7\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.153961 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.158156 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9brx4\" (UniqueName: \"kubernetes.io/projected/fd4b3618-80a1-4d23-8faa-57c206b08cf6-kube-api-access-9brx4\") pod \"keystone-operator-controller-manager-7b4567c7cf-v7kfm\" (UID: \"fd4b3618-80a1-4d23-8faa-57c206b08cf6\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.169006 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-75zc9"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.172962 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.175139 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-29fsn" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.197846 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76fc\" (UniqueName: \"kubernetes.io/projected/cf051bf3-d415-40eb-8071-8f0509377c34-kube-api-access-t76fc\") pod \"placement-operator-controller-manager-57988cc5b5-fqb9p\" (UID: \"cf051bf3-d415-40eb-8071-8f0509377c34\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.200867 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7z49\" (UniqueName: \"kubernetes.io/projected/969e1197-2aaa-42c9-b56e-7af3ef24e205-kube-api-access-z7z49\") pod \"octavia-operator-controller-manager-64cdc6ff96-9qhqv\" (UID: \"969e1197-2aaa-42c9-b56e-7af3ef24e205\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.200937 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8qj\" (UniqueName: \"kubernetes.io/projected/afd4a5dc-d971-4eeb-8272-0ead1e9b4274-kube-api-access-mc8qj\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-lp4kf\" (UID: \"afd4a5dc-d971-4eeb-8272-0ead1e9b4274\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.200987 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhskr\" (UniqueName: \"kubernetes.io/projected/20dd117f-6517-4b59-855d-a0f9d08409a2-kube-api-access-qhskr\") pod \"neutron-operator-controller-manager-6fdcddb789-bmtbl\" (UID: \"20dd117f-6517-4b59-855d-a0f9d08409a2\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.201212 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kfjz\" (UniqueName: \"kubernetes.io/projected/2fc2a1fd-6c7e-4d26-801b-5cac891fba51-kube-api-access-2kfjz\") pod \"nova-operator-controller-manager-79556f57fc-2lfqp\" (UID: \"2fc2a1fd-6c7e-4d26-801b-5cac891fba51\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" Nov 27 17:27:50 crc kubenswrapper[4792]: 
I1127 17:27:50.201333 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.201386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqxr\" (UniqueName: \"kubernetes.io/projected/57052bd6-e7c2-4ea0-bc6e-839ed4803541-kube-api-access-pkqxr\") pod \"ovn-operator-controller-manager-56897c768d-z5rhr\" (UID: \"57052bd6-e7c2-4ea0-bc6e-839ed4803541\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.201433 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhzr\" (UniqueName: \"kubernetes.io/projected/55002036-a4a7-469c-93be-e4483f455a4c-kube-api-access-nfhzr\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.246761 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.253411 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kfjz\" (UniqueName: \"kubernetes.io/projected/2fc2a1fd-6c7e-4d26-801b-5cac891fba51-kube-api-access-2kfjz\") pod \"nova-operator-controller-manager-79556f57fc-2lfqp\" (UID: \"2fc2a1fd-6c7e-4d26-801b-5cac891fba51\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.260104 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8qj\" (UniqueName: \"kubernetes.io/projected/afd4a5dc-d971-4eeb-8272-0ead1e9b4274-kube-api-access-mc8qj\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-lp4kf\" (UID: \"afd4a5dc-d971-4eeb-8272-0ead1e9b4274\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.271733 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhskr\" (UniqueName: \"kubernetes.io/projected/20dd117f-6517-4b59-855d-a0f9d08409a2-kube-api-access-qhskr\") pod \"neutron-operator-controller-manager-6fdcddb789-bmtbl\" (UID: \"20dd117f-6517-4b59-855d-a0f9d08409a2\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.271844 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.305308 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76fc\" (UniqueName: \"kubernetes.io/projected/cf051bf3-d415-40eb-8071-8f0509377c34-kube-api-access-t76fc\") pod \"placement-operator-controller-manager-57988cc5b5-fqb9p\" (UID: \"cf051bf3-d415-40eb-8071-8f0509377c34\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.305368 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7z49\" (UniqueName: \"kubernetes.io/projected/969e1197-2aaa-42c9-b56e-7af3ef24e205-kube-api-access-z7z49\") pod \"octavia-operator-controller-manager-64cdc6ff96-9qhqv\" (UID: \"969e1197-2aaa-42c9-b56e-7af3ef24e205\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.305485 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjsxq\" (UniqueName: \"kubernetes.io/projected/074f9cbe-fb30-4e1f-9156-ccc5100dcd3b-kube-api-access-tjsxq\") pod \"swift-operator-controller-manager-d77b94747-75zc9\" (UID: \"074f9cbe-fb30-4e1f-9156-ccc5100dcd3b\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.305600 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.305631 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkqxr\" (UniqueName: \"kubernetes.io/projected/57052bd6-e7c2-4ea0-bc6e-839ed4803541-kube-api-access-pkqxr\") pod \"ovn-operator-controller-manager-56897c768d-z5rhr\" (UID: \"57052bd6-e7c2-4ea0-bc6e-839ed4803541\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.325016 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfhzr\" (UniqueName: \"kubernetes.io/projected/55002036-a4a7-469c-93be-e4483f455a4c-kube-api-access-nfhzr\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.325793 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.325874 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert podName:55002036-a4a7-469c-93be-e4483f455a4c nodeName:}" failed. No retries permitted until 2025-11-27 17:27:50.825856032 +0000 UTC m=+1093.168682350 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" (UID: "55002036-a4a7-469c-93be-e4483f455a4c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.329543 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.358085 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.362176 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-75zc9"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.378692 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfhzr\" (UniqueName: \"kubernetes.io/projected/55002036-a4a7-469c-93be-e4483f455a4c-kube-api-access-nfhzr\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.403364 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7z49\" (UniqueName: \"kubernetes.io/projected/969e1197-2aaa-42c9-b56e-7af3ef24e205-kube-api-access-z7z49\") pod \"octavia-operator-controller-manager-64cdc6ff96-9qhqv\" (UID: \"969e1197-2aaa-42c9-b56e-7af3ef24e205\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.403601 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkqxr\" (UniqueName: \"kubernetes.io/projected/57052bd6-e7c2-4ea0-bc6e-839ed4803541-kube-api-access-pkqxr\") pod \"ovn-operator-controller-manager-56897c768d-z5rhr\" (UID: \"57052bd6-e7c2-4ea0-bc6e-839ed4803541\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.411017 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76fc\" (UniqueName: \"kubernetes.io/projected/cf051bf3-d415-40eb-8071-8f0509377c34-kube-api-access-t76fc\") pod \"placement-operator-controller-manager-57988cc5b5-fqb9p\" (UID: \"cf051bf3-d415-40eb-8071-8f0509377c34\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.426985 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjsxq\" (UniqueName: \"kubernetes.io/projected/074f9cbe-fb30-4e1f-9156-ccc5100dcd3b-kube-api-access-tjsxq\") pod \"swift-operator-controller-manager-d77b94747-75zc9\" (UID: \"074f9cbe-fb30-4e1f-9156-ccc5100dcd3b\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.484870 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjsxq\" (UniqueName: \"kubernetes.io/projected/074f9cbe-fb30-4e1f-9156-ccc5100dcd3b-kube-api-access-tjsxq\") pod \"swift-operator-controller-manager-d77b94747-75zc9\" 
(UID: \"074f9cbe-fb30-4e1f-9156-ccc5100dcd3b\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.485864 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.501327 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.510752 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.516099 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.520947 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-nj5sp" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.528582 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.539781 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.540014 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brdv\" (UniqueName: \"kubernetes.io/projected/4193b9b8-da59-42cf-94b2-a327608c59a6-kube-api-access-5brdv\") pod \"telemetry-operator-controller-manager-ff79b6df5-jrwv2\" (UID: \"4193b9b8-da59-42cf-94b2-a327608c59a6\") " pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.540018 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.540252 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert podName:94d0c824-194b-4d52-ba80-1cc08301a196 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:51.540227726 +0000 UTC m=+1093.883054034 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert") pod "infra-operator-controller-manager-57548d458d-zklgd" (UID: "94d0c824-194b-4d52-ba80-1cc08301a196") : secret "infra-operator-webhook-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.542727 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.555894 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.590943 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.592455 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.594943 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tbssk" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.606298 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.638990 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.641771 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8m59\" (UniqueName: \"kubernetes.io/projected/de56fbe3-d4c6-430f-8b94-5136fbf4a79c-kube-api-access-m8m59\") pod \"test-operator-controller-manager-5cd6c7f4c8-8kj9s\" (UID: \"de56fbe3-d4c6-430f-8b94-5136fbf4a79c\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.641816 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brdv\" (UniqueName: \"kubernetes.io/projected/4193b9b8-da59-42cf-94b2-a327608c59a6-kube-api-access-5brdv\") pod \"telemetry-operator-controller-manager-ff79b6df5-jrwv2\" (UID: \"4193b9b8-da59-42cf-94b2-a327608c59a6\") " pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.646373 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.649059 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.653912 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rb2hw" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.676451 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.677407 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brdv\" (UniqueName: \"kubernetes.io/projected/4193b9b8-da59-42cf-94b2-a327608c59a6-kube-api-access-5brdv\") pod \"telemetry-operator-controller-manager-ff79b6df5-jrwv2\" (UID: \"4193b9b8-da59-42cf-94b2-a327608c59a6\") " pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.696264 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.706929 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9" event={"ID":"db57e7fa-0523-4a09-91a0-371fe08e5052","Type":"ContainerStarted","Data":"5472b47f7118b49ffe0c5504e4a273fa9338d831e0d65058227f72d6ec477a5c"} Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.706970 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.707981 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.710341 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.710493 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.711091 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-5c6fm" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.717757 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.724452 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.725789 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.729495 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tp6nz" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.740088 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.746521 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8m59\" (UniqueName: \"kubernetes.io/projected/de56fbe3-d4c6-430f-8b94-5136fbf4a79c-kube-api-access-m8m59\") pod \"test-operator-controller-manager-5cd6c7f4c8-8kj9s\" (UID: \"de56fbe3-d4c6-430f-8b94-5136fbf4a79c\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.746669 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwnsn\" (UniqueName: \"kubernetes.io/projected/6f57d750-e016-4d78-bdbe-b9b1c5a21787-kube-api-access-zwnsn\") pod \"watcher-operator-controller-manager-656dcb59d4-bvh8l\" (UID: \"6f57d750-e016-4d78-bdbe-b9b1c5a21787\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.746698 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.746732 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.746777 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggl4n\" (UniqueName: \"kubernetes.io/projected/f1ef7f3c-052e-45e2-a51a-5d114d634c12-kube-api-access-ggl4n\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.752971 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.770300 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8m59\" (UniqueName: \"kubernetes.io/projected/de56fbe3-d4c6-430f-8b94-5136fbf4a79c-kube-api-access-m8m59\") pod \"test-operator-controller-manager-5cd6c7f4c8-8kj9s\" (UID: \"de56fbe3-d4c6-430f-8b94-5136fbf4a79c\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 
17:27:50.804718 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-xwttv"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.848258 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgpnl\" (UniqueName: \"kubernetes.io/projected/4d308a9c-7874-457f-a97f-4bb784a11783-kube-api-access-sgpnl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-q87xm\" (UID: \"4d308a9c-7874-457f-a97f-4bb784a11783\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.848686 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggl4n\" (UniqueName: \"kubernetes.io/projected/f1ef7f3c-052e-45e2-a51a-5d114d634c12-kube-api-access-ggl4n\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.849032 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.849241 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.849273 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwnsn\" (UniqueName: \"kubernetes.io/projected/6f57d750-e016-4d78-bdbe-b9b1c5a21787-kube-api-access-zwnsn\") pod \"watcher-operator-controller-manager-656dcb59d4-bvh8l\" (UID: \"6f57d750-e016-4d78-bdbe-b9b1c5a21787\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.849314 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert podName:55002036-a4a7-469c-93be-e4483f455a4c nodeName:}" failed. No retries permitted until 2025-11-27 17:27:51.849295364 +0000 UTC m=+1094.192121682 (durationBeforeRetry 1s). 
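Every MountVolume.SetUp failure in this stretch traces back to a Secret that does not yet exist in the openstack-operators namespace (openstack-baremetal-operator-webhook-server-cert, infra-operator-webhook-server-cert, metrics-server-cert, webhook-server-cert); the affected pods stay stuck on the mount until whatever issues the certificates creates those objects. A short client-go sketch for checking the same Secrets from outside the node is below; the kubeconfig path handling and the program itself are illustrative assumptions, while the namespace and secret names are taken from the entries above.

```go
// checksecret.go: probe the Secrets the kubelet is waiting on, via client-go.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Names observed failing in the log above.
	for _, name := range []string{
		"openstack-baremetal-operator-webhook-server-cert",
		"infra-operator-webhook-server-cert",
		"metrics-server-cert",
		"webhook-server-cert",
	} {
		_, err := client.CoreV1().Secrets("openstack-operators").Get(
			context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("%s: %v\n", name, err) // e.g. secrets "webhook-server-cert" not found
			continue
		}
		fmt.Printf("%s: present\n", name)
	}
}
```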
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" (UID: "55002036-a4a7-469c-93be-e4483f455a4c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.849336 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.849370 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.849463 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.849517 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:51.349498209 +0000 UTC m=+1093.692324537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "metrics-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.849680 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: E1127 17:27:50.849749 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:51.349718454 +0000 UTC m=+1093.692544762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "webhook-server-cert" not found Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.877224 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.877440 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwnsn\" (UniqueName: \"kubernetes.io/projected/6f57d750-e016-4d78-bdbe-b9b1c5a21787-kube-api-access-zwnsn\") pod \"watcher-operator-controller-manager-656dcb59d4-bvh8l\" (UID: \"6f57d750-e016-4d78-bdbe-b9b1c5a21787\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" Nov 27 17:27:50 crc kubenswrapper[4792]: W1127 17:27:50.878610 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6b3f84c_a691_43dc_b9bc_f5bd2fcafb79.slice/crio-49b6a2e8077f1c5704137af758e6e90e239ab873f830382d134b8e7c3447b938 WatchSource:0}: Error finding container 49b6a2e8077f1c5704137af758e6e90e239ab873f830382d134b8e7c3447b938: Status 404 returned error can't find the container with id 49b6a2e8077f1c5704137af758e6e90e239ab873f830382d134b8e7c3447b938 Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.885974 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggl4n\" (UniqueName: \"kubernetes.io/projected/f1ef7f3c-052e-45e2-a51a-5d114d634c12-kube-api-access-ggl4n\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.906449 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.953893 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgpnl\" (UniqueName: \"kubernetes.io/projected/4d308a9c-7874-457f-a97f-4bb784a11783-kube-api-access-sgpnl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-q87xm\" (UID: \"4d308a9c-7874-457f-a97f-4bb784a11783\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.972591 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.983531 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9"] Nov 27 17:27:50 crc kubenswrapper[4792]: I1127 17:27:50.987488 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgpnl\" (UniqueName: \"kubernetes.io/projected/4d308a9c-7874-457f-a97f-4bb784a11783-kube-api-access-sgpnl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-q87xm\" (UID: \"4d308a9c-7874-457f-a97f-4bb784a11783\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.002884 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq"] Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.049610 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" Nov 27 17:27:51 crc kubenswrapper[4792]: W1127 17:27:51.066203 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad88e4ad_7c33_4dac_85ed_54e7f69d8625.slice/crio-e069760b083dc2dc7085955b19ac7265f906ac913ab37c8b706b7ec3ecd67e09 WatchSource:0}: Error finding container e069760b083dc2dc7085955b19ac7265f906ac913ab37c8b706b7ec3ecd67e09: Status 404 returned error can't find the container with id e069760b083dc2dc7085955b19ac7265f906ac913ab37c8b706b7ec3ecd67e09 Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.362126 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.362184 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:51 crc kubenswrapper[4792]: E1127 17:27:51.362551 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 27 17:27:51 crc kubenswrapper[4792]: E1127 17:27:51.362610 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:52.362591204 +0000 UTC m=+1094.705417522 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "webhook-server-cert" not found Nov 27 17:27:51 crc kubenswrapper[4792]: E1127 17:27:51.362825 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 27 17:27:51 crc kubenswrapper[4792]: E1127 17:27:51.362902 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:52.362882931 +0000 UTC m=+1094.705709249 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "metrics-server-cert" not found Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.374716 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx"] Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.389266 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp"] Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.407387 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4"] Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.436155 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"] Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.452811 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms"] Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.506810 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl"] Nov 27 17:27:51 crc kubenswrapper[4792]: W1127 17:27:51.506830 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20dd117f_6517_4b59_855d_a0f9d08409a2.slice/crio-7387d6258e1f74c90d90047a022ad4a05638c2538ba3df144a327d66561f60b6 WatchSource:0}: Error finding container 7387d6258e1f74c90d90047a022ad4a05638c2538ba3df144a327d66561f60b6: Status 404 returned error can't find the container with id 7387d6258e1f74c90d90047a022ad4a05638c2538ba3df144a327d66561f60b6 Nov 27 17:27:51 crc kubenswrapper[4792]: W1127 17:27:51.508924 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd4b3618_80a1_4d23_8faa_57c206b08cf6.slice/crio-b64de82f2eb6345c2a011ea73c856f9d4a1b4a14f7378b5029adfdec97696959 WatchSource:0}: Error finding container b64de82f2eb6345c2a011ea73c856f9d4a1b4a14f7378b5029adfdec97696959: Status 404 returned error can't find the container with id b64de82f2eb6345c2a011ea73c856f9d4a1b4a14f7378b5029adfdec97696959 Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.518950 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm"] Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.565346 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" Nov 27 17:27:51 crc kubenswrapper[4792]: E1127 17:27:51.565548 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 27 17:27:51 crc kubenswrapper[4792]: E1127 17:27:51.565624 4792 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert podName:94d0c824-194b-4d52-ba80-1cc08301a196 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:53.565605917 +0000 UTC m=+1095.908432235 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert") pod "infra-operator-controller-manager-57548d458d-zklgd" (UID: "94d0c824-194b-4d52-ba80-1cc08301a196") : secret "infra-operator-webhook-server-cert" not found Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.702104 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv" event={"ID":"f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79","Type":"ContainerStarted","Data":"49b6a2e8077f1c5704137af758e6e90e239ab873f830382d134b8e7c3447b938"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.703561 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" event={"ID":"2fc2a1fd-6c7e-4d26-801b-5cac891fba51","Type":"ContainerStarted","Data":"98dc840362bdd5d13bc6c69688d05f02ff14169e5fc84fc675aee4961a1af133"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.704637 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" event={"ID":"5e49917e-d729-4661-a604-a603f9a8cca7","Type":"ContainerStarted","Data":"9003a4266750544b3759454642506bcfeb2f8c28d40a77d03327d5d22ecb0152"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.706586 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" event={"ID":"fd4b3618-80a1-4d23-8faa-57c206b08cf6","Type":"ContainerStarted","Data":"b64de82f2eb6345c2a011ea73c856f9d4a1b4a14f7378b5029adfdec97696959"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.708203 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" event={"ID":"652cb29e-91a9-433f-9002-c850a78cb8a4","Type":"ContainerStarted","Data":"299ce7afff111d80f9920492e54386d85e2dddf4442de28f8a9dc4cfd7e3f8f9"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.709464 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms" event={"ID":"880e84df-6b95-4c8d-8b4c-146f26d99098","Type":"ContainerStarted","Data":"7d74d9b83080acf5a16d075592eca7ef5d2409ea7693895c368f8d3cda901e1b"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.710716 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" event={"ID":"20dd117f-6517-4b59-855d-a0f9d08409a2","Type":"ContainerStarted","Data":"7387d6258e1f74c90d90047a022ad4a05638c2538ba3df144a327d66561f60b6"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.711824 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh" event={"ID":"04aba733-246c-4169-b91d-c7708aea6a71","Type":"ContainerStarted","Data":"fd7be633f08f96d843cc152472c6160f77b0aa32fb3df9799cbacdf7567c3410"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.712797 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq" 
event={"ID":"d29cd75e-9782-4f90-b9cf-95329e101cbb","Type":"ContainerStarted","Data":"06ffa38d502f0827784b4435d179ecfc3e73dcc8a895b55971269e8beade14be"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.714189 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9" event={"ID":"ad88e4ad-7c33-4dac-85ed-54e7f69d8625","Type":"ContainerStarted","Data":"e069760b083dc2dc7085955b19ac7265f906ac913ab37c8b706b7ec3ecd67e09"} Nov 27 17:27:51 crc kubenswrapper[4792]: I1127 17:27:51.875891 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:27:51 crc kubenswrapper[4792]: E1127 17:27:51.876118 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 27 17:27:51 crc kubenswrapper[4792]: E1127 17:27:51.876207 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert podName:55002036-a4a7-469c-93be-e4483f455a4c nodeName:}" failed. No retries permitted until 2025-11-27 17:27:53.876186292 +0000 UTC m=+1096.219012620 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" (UID: "55002036-a4a7-469c-93be-e4483f455a4c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 27 17:27:52 crc kubenswrapper[4792]: W1127 17:27:52.179796 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57052bd6_e7c2_4ea0_bc6e_839ed4803541.slice/crio-7f9e52675027e09b9e94086d506cbd04208bc0360646bf5c500f164e241241d0 WatchSource:0}: Error finding container 7f9e52675027e09b9e94086d506cbd04208bc0360646bf5c500f164e241241d0: Status 404 returned error can't find the container with id 7f9e52675027e09b9e94086d506cbd04208bc0360646bf5c500f164e241241d0 Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.189573 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-75zc9"] Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.198388 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zwnsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-bvh8l_openstack-operators(6f57d750-e016-4d78-bdbe-b9b1c5a21787): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.202345 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zwnsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-bvh8l_openstack-operators(6f57d750-e016-4d78-bdbe-b9b1c5a21787): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.204044 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" podUID="6f57d750-e016-4d78-bdbe-b9b1c5a21787" Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.204811 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s"] Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.204979 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sgpnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-q87xm_openstack-operators(4d308a9c-7874-457f-a97f-4bb784a11783): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.205007 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5brdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-ff79b6df5-jrwv2_openstack-operators(4193b9b8-da59-42cf-94b2-a327608c59a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.206167 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" podUID="4d308a9c-7874-457f-a97f-4bb784a11783" Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.207175 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5brdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
telemetry-operator-controller-manager-ff79b6df5-jrwv2_openstack-operators(4193b9b8-da59-42cf-94b2-a327608c59a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.208326 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" podUID="4193b9b8-da59-42cf-94b2-a327608c59a6" Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.211392 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv"] Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.217329 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p"] Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.224239 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm"] Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.230203 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr"] Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.236934 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf"] Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.242740 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2"] Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.249134 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l"] Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.383531 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.383931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.383746 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.384055 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:54.384035347 +0000 UTC m=+1096.726861675 (durationBeforeRetry 2s). 
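The `ErrImagePull: "pull QPS exceeded"` failures above are not registry errors: the kubelet rate-limits image pulls with a token bucket (its registryPullQPS/registryBurst settings), and roughly eighteen operator pods starting at once overrun the available burst, so the overflow pulls fail immediately and back off. A sketch of that throttling using golang.org/x/time/rate follows; the defaults of 5 QPS and burst 10 are an assumption based on the kubelet's usual configuration, not something these entries state.

```go
// pullqps.go: token-bucket throttling behind "ErrImagePull: pull QPS exceeded".
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Assumed kubelet defaults: registryPullQPS=5, registryBurst=10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	// Eighteen near-simultaneous pulls, as when the operator pods all start:
	// the first ten consume the burst, the rest are rejected outright.
	for i := 1; i <= 18; i++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: started\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}
```

The rejected pulls are transient by design; as the bucket refills, the kubelet retries them, which is what the subsequent ImagePullBackOff entries record.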
Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.384115 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.384179 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:54.38416028 +0000 UTC m=+1096.726986668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "webhook-server-cert" not found
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.731376 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" event={"ID":"cf051bf3-d415-40eb-8071-8f0509377c34","Type":"ContainerStarted","Data":"3f881d5ffac3e1caa52d7d20874161c117e8f1ed5a703fbedc20db077a66159b"}
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.733066 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" event={"ID":"4193b9b8-da59-42cf-94b2-a327608c59a6","Type":"ContainerStarted","Data":"57230362e30fdd20a7d241fb73cf582c65193b64c0b7ea428d65d2a061fd58d0"}
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.734575 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" event={"ID":"4d308a9c-7874-457f-a97f-4bb784a11783","Type":"ContainerStarted","Data":"f9739af435c69eda8ca562410b695e6cb1d1ae25b8290953e7f902cfd4daef82"}
Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.737098 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" podUID="4d308a9c-7874-457f-a97f-4bb784a11783"
Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.738125 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" podUID="4193b9b8-da59-42cf-94b2-a327608c59a6"
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.738227 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" event={"ID":"969e1197-2aaa-42c9-b56e-7af3ef24e205","Type":"ContainerStarted","Data":"d34692e2c40486842b8e35b406dbab4a75b3f1ee30434f988734509221bad66b"}
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.744975 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" event={"ID":"de56fbe3-d4c6-430f-8b94-5136fbf4a79c","Type":"ContainerStarted","Data":"7eb79d7d958da4b64ca3b510f888d8c39cc7525acadde92ddc47494ad77b10b4"}
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.761676 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" event={"ID":"afd4a5dc-d971-4eeb-8272-0ead1e9b4274","Type":"ContainerStarted","Data":"67e1738e361744f2bb111a23fd468e4dcb6fc1111d7c8f515a75413f2460332c"}
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.767000 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" event={"ID":"57052bd6-e7c2-4ea0-bc6e-839ed4803541","Type":"ContainerStarted","Data":"7f9e52675027e09b9e94086d506cbd04208bc0360646bf5c500f164e241241d0"}
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.772812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" event={"ID":"074f9cbe-fb30-4e1f-9156-ccc5100dcd3b","Type":"ContainerStarted","Data":"ee89bb003d8347fcb0c04d4c6d31c9cc7533345220084f41385a63943de1a7a9"}
Nov 27 17:27:52 crc kubenswrapper[4792]: I1127 17:27:52.774366 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" event={"ID":"6f57d750-e016-4d78-bdbe-b9b1c5a21787","Type":"ContainerStarted","Data":"dc386de1ade89818eeb0318b73209895f2e7f41864c6316e48e1181463e1d9ee"}
Nov 27 17:27:52 crc kubenswrapper[4792]: E1127 17:27:52.784961 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" podUID="6f57d750-e016-4d78-bdbe-b9b1c5a21787"
Nov 27 17:27:53 crc kubenswrapper[4792]: I1127 17:27:53.611539 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:27:53 crc kubenswrapper[4792]: E1127 17:27:53.611807 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 27 17:27:53 crc kubenswrapper[4792]: E1127 17:27:53.611855 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert podName:94d0c824-194b-4d52-ba80-1cc08301a196 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:57.611841436 +0000 UTC m=+1099.954667754 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert") pod "infra-operator-controller-manager-57548d458d-zklgd" (UID: "94d0c824-194b-4d52-ba80-1cc08301a196") : secret "infra-operator-webhook-server-cert" not found
Nov 27 17:27:53 crc kubenswrapper[4792]: E1127 17:27:53.787693 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" podUID="6f57d750-e016-4d78-bdbe-b9b1c5a21787"
Nov 27 17:27:53 crc kubenswrapper[4792]: E1127 17:27:53.788783 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.5:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" podUID="4193b9b8-da59-42cf-94b2-a327608c59a6"
Nov 27 17:27:53 crc kubenswrapper[4792]: E1127 17:27:53.792396 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" podUID="4d308a9c-7874-457f-a97f-4bb784a11783"
Nov 27 17:27:53 crc kubenswrapper[4792]: I1127 17:27:53.916524 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2"
Nov 27 17:27:53 crc kubenswrapper[4792]: E1127 17:27:53.916670 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 27 17:27:53 crc kubenswrapper[4792]: E1127 17:27:53.917055 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert podName:55002036-a4a7-469c-93be-e4483f455a4c nodeName:}" failed. No retries permitted until 2025-11-27 17:27:57.917036837 +0000 UTC m=+1100.259863165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" (UID: "55002036-a4a7-469c-93be-e4483f455a4c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 27 17:27:54 crc kubenswrapper[4792]: E1127 17:27:54.427659 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 27 17:27:54 crc kubenswrapper[4792]: E1127 17:27:54.427767 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:58.427747183 +0000 UTC m=+1100.770573501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "metrics-server-cert" not found
Nov 27 17:27:54 crc kubenswrapper[4792]: I1127 17:27:54.427445 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:27:54 crc kubenswrapper[4792]: I1127 17:27:54.428265 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:27:54 crc kubenswrapper[4792]: E1127 17:27:54.428363 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 27 17:27:54 crc kubenswrapper[4792]: E1127 17:27:54.428403 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:27:58.428392829 +0000 UTC m=+1100.771219137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "webhook-server-cert" not found
Nov 27 17:27:57 crc kubenswrapper[4792]: I1127 17:27:57.689953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:27:57 crc kubenswrapper[4792]: E1127 17:27:57.691942 4792 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 27 17:27:57 crc kubenswrapper[4792]: E1127 17:27:57.692024 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert podName:94d0c824-194b-4d52-ba80-1cc08301a196 nodeName:}" failed. No retries permitted until 2025-11-27 17:28:05.692004349 +0000 UTC m=+1108.034830677 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert") pod "infra-operator-controller-manager-57548d458d-zklgd" (UID: "94d0c824-194b-4d52-ba80-1cc08301a196") : secret "infra-operator-webhook-server-cert" not found
Nov 27 17:27:57 crc kubenswrapper[4792]: I1127 17:27:57.995518 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2"
Nov 27 17:27:57 crc kubenswrapper[4792]: E1127 17:27:57.995743 4792 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 27 17:27:57 crc kubenswrapper[4792]: E1127 17:27:57.995813 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert podName:55002036-a4a7-469c-93be-e4483f455a4c nodeName:}" failed. No retries permitted until 2025-11-27 17:28:05.995795426 +0000 UTC m=+1108.338621744 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" (UID: "55002036-a4a7-469c-93be-e4483f455a4c") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 27 17:27:58 crc kubenswrapper[4792]: I1127 17:27:58.505728 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:27:58 crc kubenswrapper[4792]: I1127 17:27:58.505974 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:27:58 crc kubenswrapper[4792]: E1127 17:27:58.505985 4792 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 27 17:27:58 crc kubenswrapper[4792]: E1127 17:27:58.506075 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:28:06.506054671 +0000 UTC m=+1108.848880989 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "webhook-server-cert" not found
Nov 27 17:27:58 crc kubenswrapper[4792]: E1127 17:27:58.506129 4792 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 27 17:27:58 crc kubenswrapper[4792]: E1127 17:27:58.506195 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs podName:f1ef7f3c-052e-45e2-a51a-5d114d634c12 nodeName:}" failed. No retries permitted until 2025-11-27 17:28:06.506176574 +0000 UTC m=+1108.849002972 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs") pod "openstack-operator-controller-manager-6644d5b8df-w6zdt" (UID: "f1ef7f3c-052e-45e2-a51a-5d114d634c12") : secret "metrics-server-cert" not found
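[Editor's note] Note the durationBeforeRetry progression for the failed secret mounts: 2s at 17:27:52, 4s at 17:27:54, 8s at 17:27:58. The kubelet's nestedpendingoperations tracker doubles the per-volume retry delay on each consecutive failure until the mount finally succeeds (here, once the cert secrets appear around 17:28:05). A small Go sketch of the visible doubling; the cap is an assumption for illustration, not the kubelet's exact constant:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // First visible delay in the log is 2s; each consecutive
        // failure doubles it until the operation succeeds.
        delay := 2 * time.Second
        const maxDelay = 2 * time.Minute // assumed cap, illustrative only
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }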
Nov 27 17:28:05 crc kubenswrapper[4792]: I1127 17:28:05.746108 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:28:05 crc kubenswrapper[4792]: I1127 17:28:05.758534 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94d0c824-194b-4d52-ba80-1cc08301a196-cert\") pod \"infra-operator-controller-manager-57548d458d-zklgd\" (UID: \"94d0c824-194b-4d52-ba80-1cc08301a196\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:28:05 crc kubenswrapper[4792]: E1127 17:28:05.959107 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7"
Nov 27 17:28:05 crc kubenswrapper[4792]: E1127 17:28:05.959982 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2kfjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-2lfqp_openstack-operators(2fc2a1fd-6c7e-4d26-801b-5cac891fba51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.009391 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.058839 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.062970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/55002036-a4a7-469c-93be-e4483f455a4c-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2\" (UID: \"55002036-a4a7-469c-93be-e4483f455a4c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.084033 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.569475 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.569531 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.574151 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-metrics-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.588456 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f1ef7f3c-052e-45e2-a51a-5d114d634c12-webhook-certs\") pod \"openstack-operator-controller-manager-6644d5b8df-w6zdt\" (UID: \"f1ef7f3c-052e-45e2-a51a-5d114d634c12\") " pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:28:06 crc kubenswrapper[4792]: I1127 17:28:06.630975 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"
Nov 27 17:28:08 crc kubenswrapper[4792]: I1127 17:28:08.290355 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:28:08 crc kubenswrapper[4792]: I1127 17:28:08.290787 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 17:28:16 crc kubenswrapper[4792]: E1127 17:28:16.444937 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2ee37ff474bee3203447df4f326a9279a515e770573153338296dd074722c677"
Nov 27 17:28:16 crc kubenswrapper[4792]: E1127 17:28:16.445625 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2ee37ff474bee3203447df4f326a9279a515e770573153338296dd074722c677,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sthmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5b77f656f-gqmrh_openstack-operators(04aba733-246c-4169-b91d-c7708aea6a71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 27 17:28:17 crc kubenswrapper[4792]: E1127 17:28:17.023285 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6"
Nov 27 17:28:17 crc kubenswrapper[4792]: E1127 17:28:17.023925 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkqxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-z5rhr_openstack-operators(57052bd6-e7c2-4ea0-bc6e-839ed4803541): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 27 17:28:19 crc kubenswrapper[4792]: E1127 17:28:19.412225 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:89910bc3ecceb7590d3207ac294eb7354de358cf39ef03c72323b26c598e50e6"
Nov 27 17:28:19 crc kubenswrapper[4792]: E1127 17:28:19.412778 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:89910bc3ecceb7590d3207ac294eb7354de358cf39ef03c72323b26c598e50e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxfm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5d499bf58b-dnsbx_openstack-operators(652cb29e-91a9-433f-9002-c850a78cb8a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 27 17:28:19 crc kubenswrapper[4792]: E1127 17:28:19.988499 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711"
Nov 27 17:28:19 crc kubenswrapper[4792]: E1127 17:28:19.988734 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9brx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b4567c7cf-v7kfm_openstack-operators(fd4b3618-80a1-4d23-8faa-57c206b08cf6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 27 17:28:23 crc kubenswrapper[4792]: I1127 17:28:23.158185 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"]
Nov 27 17:28:23 crc kubenswrapper[4792]: I1127 17:28:23.344853 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt"]
Nov 27 17:28:23 crc kubenswrapper[4792]: I1127 17:28:23.353771 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2"]
Nov 27 17:28:23 crc kubenswrapper[4792]: W1127 17:28:23.744321 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1ef7f3c_052e_45e2_a51a_5d114d634c12.slice/crio-81760daa715ea658ae3dc32ef01e0a53f57f5c2692dd12823aa279e5e2f629f3 WatchSource:0}: Error finding container 81760daa715ea658ae3dc32ef01e0a53f57f5c2692dd12823aa279e5e2f629f3: Status 404 returned error can't find the container with id 81760daa715ea658ae3dc32ef01e0a53f57f5c2692dd12823aa279e5e2f629f3
Nov 27 17:28:23 crc kubenswrapper[4792]: W1127 17:28:23.745931 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55002036_a4a7_469c_93be_e4483f455a4c.slice/crio-bedc74b83cb0bdc271b88ebe5ff70644a0f1e7f5ad5b574d16a3e3b1d35cf500 WatchSource:0}: Error finding container bedc74b83cb0bdc271b88ebe5ff70644a0f1e7f5ad5b574d16a3e3b1d35cf500: Status 404 returned error can't find the container with id bedc74b83cb0bdc271b88ebe5ff70644a0f1e7f5ad5b574d16a3e3b1d35cf500
Nov 27 17:28:24 crc kubenswrapper[4792]: I1127 17:28:24.105503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" event={"ID":"94d0c824-194b-4d52-ba80-1cc08301a196","Type":"ContainerStarted","Data":"c50586735f33ac1c56abb0fff7930852b95b51a0ba73adbe3b6d323d9fe631c7"}
Nov 27 17:28:24 crc kubenswrapper[4792]: I1127 17:28:24.108752 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" event={"ID":"55002036-a4a7-469c-93be-e4483f455a4c","Type":"ContainerStarted","Data":"bedc74b83cb0bdc271b88ebe5ff70644a0f1e7f5ad5b574d16a3e3b1d35cf500"}
Nov 27 17:28:24 crc kubenswrapper[4792]: I1127 17:28:24.133751 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9" event={"ID":"db57e7fa-0523-4a09-91a0-371fe08e5052","Type":"ContainerStarted","Data":"f10df81177b72190bb63fc224cf9d42a4393af2dc13e383b52ff94382c71a3e5"}
Nov 27 17:28:24 crc kubenswrapper[4792]: I1127 17:28:24.136258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq" event={"ID":"d29cd75e-9782-4f90-b9cf-95329e101cbb","Type":"ContainerStarted","Data":"f1d27785d970621ac4a3db2334190e18f303c4e6b8d5e9b5d5d36ead495a1d43"}
Nov 27 17:28:24 crc kubenswrapper[4792]: I1127 17:28:24.138379 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" event={"ID":"f1ef7f3c-052e-45e2-a51a-5d114d634c12","Type":"ContainerStarted","Data":"81760daa715ea658ae3dc32ef01e0a53f57f5c2692dd12823aa279e5e2f629f3"}
Nov 27 17:28:24 crc kubenswrapper[4792]: I1127 17:28:24.139471 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms" event={"ID":"880e84df-6b95-4c8d-8b4c-146f26d99098","Type":"ContainerStarted","Data":"68bcf2c6dbf1d1d769968ad98aff89b0aa0b4925942363d1e238b9c6a01d8d80"}
Nov 27 17:28:24 crc kubenswrapper[4792]: I1127 17:28:24.141008 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" event={"ID":"969e1197-2aaa-42c9-b56e-7af3ef24e205","Type":"ContainerStarted","Data":"6726020c8ddad18be783f6aae5b9cc8a7bf583af0f0e0b30d309860ae311ebec"}
Nov 27 17:28:24 crc kubenswrapper[4792]: I1127 17:28:24.144678 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" event={"ID":"de56fbe3-d4c6-430f-8b94-5136fbf4a79c","Type":"ContainerStarted","Data":"243bab0b65a14b5924e6c5ca7a2b9aa71cfea76cdc1380cd460fd67fc04e7f18"}
Nov 27 17:28:25 crc kubenswrapper[4792]: I1127 17:28:25.154742 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" event={"ID":"074f9cbe-fb30-4e1f-9156-ccc5100dcd3b","Type":"ContainerStarted","Data":"27447a18f2b6ffa177f5ee743d5f559414954fb8e9a276b80a05728f1a7da835"}
Nov 27 17:28:25 crc kubenswrapper[4792]: I1127 17:28:25.156980 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" event={"ID":"5e49917e-d729-4661-a604-a603f9a8cca7","Type":"ContainerStarted","Data":"758e925b0f7a6707fa45b9e1b3c404744fdab0cda15f4713243557eacb8baf39"}
Nov 27 17:28:25 crc kubenswrapper[4792]: I1127 17:28:25.158426 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" event={"ID":"afd4a5dc-d971-4eeb-8272-0ead1e9b4274","Type":"ContainerStarted","Data":"bdedfe19e0637529f8315b9945fa64aa58021d14f5d56e6015498d107a26c7e2"}
Nov 27 17:28:25 crc kubenswrapper[4792]: I1127 17:28:25.160952 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" event={"ID":"20dd117f-6517-4b59-855d-a0f9d08409a2","Type":"ContainerStarted","Data":"b2953b9edf8bc82d893c7b1bac3947a8d49c08300eac2cc4effd83be65dbc85e"}
Nov 27 17:28:25 crc kubenswrapper[4792]: I1127 17:28:25.162577 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9" event={"ID":"ad88e4ad-7c33-4dac-85ed-54e7f69d8625","Type":"ContainerStarted","Data":"120a6d32aaaaa35378a524a5ee5e73f8a329b1d7a619f3c1a6902011c1930a6e"}
Nov 27 17:28:25 crc kubenswrapper[4792]: I1127 17:28:25.164353 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" event={"ID":"cf051bf3-d415-40eb-8071-8f0509377c34","Type":"ContainerStarted","Data":"fb386f4fd1464d5ff96c743fbe2872d3d83fe94fd0eadfa8e2bd5d6dbae06a35"}
Nov 27 17:28:25 crc kubenswrapper[4792]: I1127 17:28:25.165544 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv" event={"ID":"f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79","Type":"ContainerStarted","Data":"174a8fe6337d3ed2eab64d6d63b36e2749596643a9b1bc282b57091dfbf7f830"}
Nov 27 17:28:28 crc kubenswrapper[4792]: I1127 17:28:28.198637 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" event={"ID":"4d308a9c-7874-457f-a97f-4bb784a11783","Type":"ContainerStarted","Data":"6105bbfb76feac358c95dcd55fc334a5645343a5d97a8fae3b6082838bc8ad2f"}
Nov 27 17:28:28 crc kubenswrapper[4792]: I1127 17:28:28.215535 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-q87xm" podStartSLOduration=7.167523493 podStartE2EDuration="38.215519229s" podCreationTimestamp="2025-11-27 17:27:50 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.204827953 +0000 UTC m=+1094.547654271" lastFinishedPulling="2025-11-27 17:28:23.252823689 +0000 UTC m=+1125.595650007" observedRunningTime="2025-11-27 17:28:28.214078443 +0000 UTC m=+1130.556904761" watchObservedRunningTime="2025-11-27 17:28:28.215519229 +0000 UTC m=+1130.558345547"
Nov 27 17:28:29 crc kubenswrapper[4792]: I1127 17:28:29.224749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" event={"ID":"6f57d750-e016-4d78-bdbe-b9b1c5a21787","Type":"ContainerStarted","Data":"b817f62bcd925373c16911ada0049deb3c880102319e4d6f4fb292fd061e3b0f"}
event={"ID":"6f57d750-e016-4d78-bdbe-b9b1c5a21787","Type":"ContainerStarted","Data":"b817f62bcd925373c16911ada0049deb3c880102319e4d6f4fb292fd061e3b0f"} Nov 27 17:28:29 crc kubenswrapper[4792]: I1127 17:28:29.231025 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" event={"ID":"f1ef7f3c-052e-45e2-a51a-5d114d634c12","Type":"ContainerStarted","Data":"7c88e6e44fcd05644ee9b69fec88215d6607cf3933212cdbb305157ec8be70ca"} Nov 27 17:28:29 crc kubenswrapper[4792]: I1127 17:28:29.269995 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" podStartSLOduration=39.269970568 podStartE2EDuration="39.269970568s" podCreationTimestamp="2025-11-27 17:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:28:29.256462554 +0000 UTC m=+1131.599288872" watchObservedRunningTime="2025-11-27 17:28:29.269970568 +0000 UTC m=+1131.612796886" Nov 27 17:28:30 crc kubenswrapper[4792]: I1127 17:28:30.238820 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:28:31 crc kubenswrapper[4792]: I1127 17:28:31.249145 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" event={"ID":"4193b9b8-da59-42cf-94b2-a327608c59a6","Type":"ContainerStarted","Data":"07420e2cc1e8953aa488c121c157facca1b6d2ba85c83c0692a876043ab5d4f5"} Nov 27 17:28:32 crc kubenswrapper[4792]: E1127 17:28:32.396395 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" podUID="652cb29e-91a9-433f-9002-c850a78cb8a4" Nov 27 17:28:32 crc kubenswrapper[4792]: E1127 17:28:32.728763 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" podUID="2fc2a1fd-6c7e-4d26-801b-5cac891fba51" Nov 27 17:28:32 crc kubenswrapper[4792]: E1127 17:28:32.759172 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh" podUID="04aba733-246c-4169-b91d-c7708aea6a71" Nov 27 17:28:32 crc kubenswrapper[4792]: E1127 17:28:32.760247 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" podUID="fd4b3618-80a1-4d23-8faa-57c206b08cf6" Nov 27 17:28:32 crc kubenswrapper[4792]: E1127 17:28:32.997006 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" 
podUID="57052bd6-e7c2-4ea0-bc6e-839ed4803541" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.265548 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh" event={"ID":"04aba733-246c-4169-b91d-c7708aea6a71","Type":"ContainerStarted","Data":"3e3dc2e908438268f0c9be8f07e36c8b3bed3d371ad6aef0157ff31878f14aee"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.270616 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" event={"ID":"55002036-a4a7-469c-93be-e4483f455a4c","Type":"ContainerStarted","Data":"6b6de2dd9ee4c5d2541c8ef3c810df68f4a031d92f813b09b7ba678983d27f6e"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.278601 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq" event={"ID":"d29cd75e-9782-4f90-b9cf-95329e101cbb","Type":"ContainerStarted","Data":"18d1a8be835da45cac278bb3e5410b29d3949dced9ad168fd34bfaa7982baf8b"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.279494 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.281343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" event={"ID":"cf051bf3-d415-40eb-8071-8f0509377c34","Type":"ContainerStarted","Data":"9e6cc30f9bc1db5e4d6a60970febec705b116f8d6fb54647fd8adfecb6b7ced7"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.281959 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.282120 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.282943 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" event={"ID":"57052bd6-e7c2-4ea0-bc6e-839ed4803541","Type":"ContainerStarted","Data":"ece4ddb821bf7e9e7718456f57b61495e9151272d0fb2aa27fd7053851822590"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.285074 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" event={"ID":"2fc2a1fd-6c7e-4d26-801b-5cac891fba51","Type":"ContainerStarted","Data":"80e3cb8f9700891203ab2ab73a5eaf994a5bad155a9e704f75e06e42fcabba76"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.287486 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" event={"ID":"652cb29e-91a9-433f-9002-c850a78cb8a4","Type":"ContainerStarted","Data":"29b01cd4c2b4e9dad6aa9336a3ac3c299fe09ec5fcfce554f187960077597446"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.297859 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.332123 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" 
event={"ID":"074f9cbe-fb30-4e1f-9156-ccc5100dcd3b","Type":"ContainerStarted","Data":"18f9295649b5503a765b49842558bb5e1e5344a15b968a8621104f603c5f0437"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.340704 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.346078 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.402259 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" event={"ID":"5e49917e-d729-4661-a604-a603f9a8cca7","Type":"ContainerStarted","Data":"687e8668fe0774b60a69f5c9301bdbbcebb21820c2e0483af36902adf8f6122c"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.403513 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.417594 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.426022 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" event={"ID":"4193b9b8-da59-42cf-94b2-a327608c59a6","Type":"ContainerStarted","Data":"d7b9de79061837f653b5ed30b766251146225bb1a08140eb7b1d1cc7d451297b"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.426110 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.458427 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" event={"ID":"94d0c824-194b-4d52-ba80-1cc08301a196","Type":"ContainerStarted","Data":"ce47eb305d801af5f328b9ede4103dd4c18858b72d4c6c49100f6411adc6c0cb"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.460985 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-qs7wq" podStartSLOduration=3.48079256 podStartE2EDuration="44.460966253s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.085714043 +0000 UTC m=+1093.428540361" lastFinishedPulling="2025-11-27 17:28:32.065887726 +0000 UTC m=+1134.408714054" observedRunningTime="2025-11-27 17:28:33.378413121 +0000 UTC m=+1135.721239439" watchObservedRunningTime="2025-11-27 17:28:33.460966253 +0000 UTC m=+1135.803792571" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.463969 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-fqb9p" podStartSLOduration=4.577394343 podStartE2EDuration="44.463958057s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.196968309 +0000 UTC m=+1094.539794627" lastFinishedPulling="2025-11-27 17:28:32.083532023 +0000 UTC m=+1134.426358341" observedRunningTime="2025-11-27 17:28:33.405754077 +0000 UTC m=+1135.748580395" watchObservedRunningTime="2025-11-27 17:28:33.463958057 +0000 
UTC m=+1135.806784375" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.471588 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" event={"ID":"fd4b3618-80a1-4d23-8faa-57c206b08cf6","Type":"ContainerStarted","Data":"6801ebe1d5dd9822dc41a5514e971e29c4059ad2c705236ea112ab2afc08da71"} Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.483553 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-7gmv4" podStartSLOduration=3.814866546 podStartE2EDuration="44.483537182s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.401076366 +0000 UTC m=+1093.743902684" lastFinishedPulling="2025-11-27 17:28:32.069746992 +0000 UTC m=+1134.412573320" observedRunningTime="2025-11-27 17:28:33.457936609 +0000 UTC m=+1135.800762927" watchObservedRunningTime="2025-11-27 17:28:33.483537182 +0000 UTC m=+1135.826363500" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.596189 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-75zc9" podStartSLOduration=4.694151943 podStartE2EDuration="44.596172549s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.174172125 +0000 UTC m=+1094.516998453" lastFinishedPulling="2025-11-27 17:28:32.076192741 +0000 UTC m=+1134.419019059" observedRunningTime="2025-11-27 17:28:33.5812658 +0000 UTC m=+1135.924092118" watchObservedRunningTime="2025-11-27 17:28:33.596172549 +0000 UTC m=+1135.938998867" Nov 27 17:28:33 crc kubenswrapper[4792]: I1127 17:28:33.663491 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2" podStartSLOduration=4.742489168 podStartE2EDuration="44.663449613s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.204885355 +0000 UTC m=+1094.547711673" lastFinishedPulling="2025-11-27 17:28:32.12584577 +0000 UTC m=+1134.468672118" observedRunningTime="2025-11-27 17:28:33.651148389 +0000 UTC m=+1135.993974707" watchObservedRunningTime="2025-11-27 17:28:33.663449613 +0000 UTC m=+1136.006275931" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.515283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" event={"ID":"969e1197-2aaa-42c9-b56e-7af3ef24e205","Type":"ContainerStarted","Data":"8765379647cd278268b2a6ff574839ec10e63deb58d833cef412fcdc99168180"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.515637 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.519591 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.535332 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" event={"ID":"20dd117f-6517-4b59-855d-a0f9d08409a2","Type":"ContainerStarted","Data":"a1839891426a884f46c6d5e73e5b9edb06c153cb92cfcdef3e83f8126c66f24d"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.535487 4792 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.537110 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.540054 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" event={"ID":"55002036-a4a7-469c-93be-e4483f455a4c","Type":"ContainerStarted","Data":"ee046c3309b39df5259ca111d0b207d78088de7953bc852d78af3152400a1eec"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.540969 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.546619 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9" event={"ID":"db57e7fa-0523-4a09-91a0-371fe08e5052","Type":"ContainerStarted","Data":"11b6c971ded95c645d45a7ccacce9839d8f502379928bc87f3cfd43398d64040"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.547556 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.549019 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.551337 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9" event={"ID":"ad88e4ad-7c33-4dac-85ed-54e7f69d8625","Type":"ContainerStarted","Data":"e49e15438b75192c186fb162ef1b33a4c117e42150d663fb45699103fadcd1ca"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.553074 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-9qhqv" podStartSLOduration=5.5776929939999995 podStartE2EDuration="45.553051655s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.185221528 +0000 UTC m=+1094.528047846" lastFinishedPulling="2025-11-27 17:28:32.160580189 +0000 UTC m=+1134.503406507" observedRunningTime="2025-11-27 17:28:34.538047963 +0000 UTC m=+1136.880874281" watchObservedRunningTime="2025-11-27 17:28:34.553051655 +0000 UTC m=+1136.895877973" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.553269 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.554170 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.556099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv" event={"ID":"f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79","Type":"ContainerStarted","Data":"831d76f2dfdf9d0e7f67933986b631a068972ef6847e0575ef292d5ee776a17c"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 
17:28:34.556920 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.560944 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.571184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms" event={"ID":"880e84df-6b95-4c8d-8b4c-146f26d99098","Type":"ContainerStarted","Data":"53757abf0e8788d316fc27ec35ab8a740278ad2526258aee769f19052ea1c82d"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.571997 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.576195 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.584264 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" event={"ID":"6f57d750-e016-4d78-bdbe-b9b1c5a21787","Type":"ContainerStarted","Data":"37bdf99676c20c029f21565d6153338da9a7842c0218174032effb6c9804ef82"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.585218 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.603717 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.614168 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" event={"ID":"2fc2a1fd-6c7e-4d26-801b-5cac891fba51","Type":"ContainerStarted","Data":"65f0ddeca37eaad08d1357c66ae4b3c2e89906293f9ac7daf205961656111e63"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.615455 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.626164 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-bmtbl" podStartSLOduration=4.709031301 podStartE2EDuration="45.626143723s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.509198401 +0000 UTC m=+1093.852024719" lastFinishedPulling="2025-11-27 17:28:32.426310813 +0000 UTC m=+1134.769137141" observedRunningTime="2025-11-27 17:28:34.603692708 +0000 UTC m=+1136.946519026" watchObservedRunningTime="2025-11-27 17:28:34.626143723 +0000 UTC m=+1136.968970041" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.635561 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-z7fm9" podStartSLOduration=3.706522157 podStartE2EDuration="45.635545816s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:50.62523869 +0000 UTC 
m=+1092.968065008" lastFinishedPulling="2025-11-27 17:28:32.554262349 +0000 UTC m=+1134.897088667" observedRunningTime="2025-11-27 17:28:34.62279244 +0000 UTC m=+1136.965618758" watchObservedRunningTime="2025-11-27 17:28:34.635545816 +0000 UTC m=+1136.978372134" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.637142 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" event={"ID":"de56fbe3-d4c6-430f-8b94-5136fbf4a79c","Type":"ContainerStarted","Data":"c6e7a604069fe8e3d25ae38d5e6a14e7f84d963c84c2525db38706c3237168d1"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.637890 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.646593 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.652943 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" event={"ID":"afd4a5dc-d971-4eeb-8272-0ead1e9b4274","Type":"ContainerStarted","Data":"c68774f1a620044495574a41dade30bc4c0c617fc05d6ffd935901187897f016"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.653913 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.657192 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-xwttv" podStartSLOduration=4.267920196 podStartE2EDuration="45.657172821s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:50.884913765 +0000 UTC m=+1093.227740083" lastFinishedPulling="2025-11-27 17:28:32.27416639 +0000 UTC m=+1134.616992708" observedRunningTime="2025-11-27 17:28:34.652873495 +0000 UTC m=+1136.995699813" watchObservedRunningTime="2025-11-27 17:28:34.657172821 +0000 UTC m=+1136.999999139" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.663385 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.673206 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" event={"ID":"94d0c824-194b-4d52-ba80-1cc08301a196","Type":"ContainerStarted","Data":"bdb6cf4f288c40bd9012551b4aaf9de7d7dd653b85af97b6cb24e7cd0cf51b7f"} Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.673255 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.730994 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" podStartSLOduration=37.488386325 podStartE2EDuration="45.730969427s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:28:23.747684423 +0000 UTC m=+1126.090510741" lastFinishedPulling="2025-11-27 17:28:31.990267525 +0000 UTC m=+1134.333093843" 
observedRunningTime="2025-11-27 17:28:34.688620789 +0000 UTC m=+1137.031447107" watchObservedRunningTime="2025-11-27 17:28:34.730969427 +0000 UTC m=+1137.073795745" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.764658 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-8njms" podStartSLOduration=4.871657045 podStartE2EDuration="45.76462654s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.426841244 +0000 UTC m=+1093.769667562" lastFinishedPulling="2025-11-27 17:28:32.319810729 +0000 UTC m=+1134.662637057" observedRunningTime="2025-11-27 17:28:34.732234738 +0000 UTC m=+1137.075061056" watchObservedRunningTime="2025-11-27 17:28:34.76462654 +0000 UTC m=+1137.107452858" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.773276 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-bvh8l" podStartSLOduration=4.827646798 podStartE2EDuration="44.773257983s" podCreationTimestamp="2025-11-27 17:27:50 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.198266061 +0000 UTC m=+1094.541092379" lastFinishedPulling="2025-11-27 17:28:32.143877246 +0000 UTC m=+1134.486703564" observedRunningTime="2025-11-27 17:28:34.761042261 +0000 UTC m=+1137.103868569" watchObservedRunningTime="2025-11-27 17:28:34.773257983 +0000 UTC m=+1137.116084301" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.807203 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-hkks9" podStartSLOduration=4.678640258 podStartE2EDuration="45.807183633s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.073149112 +0000 UTC m=+1093.415975430" lastFinishedPulling="2025-11-27 17:28:32.201692487 +0000 UTC m=+1134.544518805" observedRunningTime="2025-11-27 17:28:34.79776687 +0000 UTC m=+1137.140593208" watchObservedRunningTime="2025-11-27 17:28:34.807183633 +0000 UTC m=+1137.150009951" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.869821 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" podStartSLOduration=37.144070406 podStartE2EDuration="45.869797262s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:28:23.264098108 +0000 UTC m=+1125.606924426" lastFinishedPulling="2025-11-27 17:28:31.989824964 +0000 UTC m=+1134.332651282" observedRunningTime="2025-11-27 17:28:34.844990538 +0000 UTC m=+1137.187816856" watchObservedRunningTime="2025-11-27 17:28:34.869797262 +0000 UTC m=+1137.212623580" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.873776 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-8kj9s" podStartSLOduration=5.859206159 podStartE2EDuration="45.87375642s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.172167025 +0000 UTC m=+1094.514993343" lastFinishedPulling="2025-11-27 17:28:32.186717286 +0000 UTC m=+1134.529543604" observedRunningTime="2025-11-27 17:28:34.863956767 +0000 UTC m=+1137.206783095" watchObservedRunningTime="2025-11-27 17:28:34.87375642 +0000 UTC m=+1137.216582738" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.890342 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp" podStartSLOduration=2.976536845 podStartE2EDuration="45.89031596s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.373306079 +0000 UTC m=+1093.716132397" lastFinishedPulling="2025-11-27 17:28:34.287085194 +0000 UTC m=+1136.629911512" observedRunningTime="2025-11-27 17:28:34.885338216 +0000 UTC m=+1137.228164534" watchObservedRunningTime="2025-11-27 17:28:34.89031596 +0000 UTC m=+1137.233142278" Nov 27 17:28:34 crc kubenswrapper[4792]: I1127 17:28:34.966148 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-lp4kf" podStartSLOduration=5.657044338 podStartE2EDuration="45.966130135s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.187029823 +0000 UTC m=+1094.529856151" lastFinishedPulling="2025-11-27 17:28:32.49611562 +0000 UTC m=+1134.838941948" observedRunningTime="2025-11-27 17:28:34.918284432 +0000 UTC m=+1137.261110760" watchObservedRunningTime="2025-11-27 17:28:34.966130135 +0000 UTC m=+1137.308956463" Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.680951 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" event={"ID":"652cb29e-91a9-433f-9002-c850a78cb8a4","Type":"ContainerStarted","Data":"78f059d9878e597d8b0e5de7fe863c55e788338a51d8cb3290fc740781c2758d"} Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.681140 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.682711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh" event={"ID":"04aba733-246c-4169-b91d-c7708aea6a71","Type":"ContainerStarted","Data":"3149802b656dd0679a8325dc7e9b52c98d5050e47772960e5e5b59cd0e41e160"} Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.682791 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh" Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.684393 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" event={"ID":"57052bd6-e7c2-4ea0-bc6e-839ed4803541","Type":"ContainerStarted","Data":"b3035b88e615f46aa09f0784bc7af2375264b4122c8cb8ab4f5037db06dc6737"} Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.684519 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.686049 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" event={"ID":"fd4b3618-80a1-4d23-8faa-57c206b08cf6","Type":"ContainerStarted","Data":"6faf2d0919c72cede530d4f75d1049b2ec2d6e84bf5b28a8ab95cfa16d9e9ece"} Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.686840 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.709966 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx" podStartSLOduration=3.682964323 podStartE2EDuration="46.709946569s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.375809841 +0000 UTC m=+1093.718636159" lastFinishedPulling="2025-11-27 17:28:34.402291044 +0000 UTC m=+1136.745618405" observedRunningTime="2025-11-27 17:28:35.708971845 +0000 UTC m=+1138.051798173" watchObservedRunningTime="2025-11-27 17:28:35.709946569 +0000 UTC m=+1138.052772897" Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.731640 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh" podStartSLOduration=3.591421639 podStartE2EDuration="46.731620166s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.426400983 +0000 UTC m=+1093.769227301" lastFinishedPulling="2025-11-27 17:28:34.5665995 +0000 UTC m=+1136.909425828" observedRunningTime="2025-11-27 17:28:35.724361816 +0000 UTC m=+1138.067188144" watchObservedRunningTime="2025-11-27 17:28:35.731620166 +0000 UTC m=+1138.074446504" Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.751267 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm" podStartSLOduration=3.696019287 podStartE2EDuration="46.751248781s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:51.514178745 +0000 UTC m=+1093.857005063" lastFinishedPulling="2025-11-27 17:28:34.569408239 +0000 UTC m=+1136.912234557" observedRunningTime="2025-11-27 17:28:35.742852374 +0000 UTC m=+1138.085678702" watchObservedRunningTime="2025-11-27 17:28:35.751248781 +0000 UTC m=+1138.094075109" Nov 27 17:28:35 crc kubenswrapper[4792]: I1127 17:28:35.761420 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr" podStartSLOduration=4.546183813 podStartE2EDuration="46.761400723s" podCreationTimestamp="2025-11-27 17:27:49 +0000 UTC" firstStartedPulling="2025-11-27 17:27:52.187059414 +0000 UTC m=+1094.529885732" lastFinishedPulling="2025-11-27 17:28:34.402276324 +0000 UTC m=+1136.745102642" observedRunningTime="2025-11-27 17:28:35.758102051 +0000 UTC m=+1138.100928369" watchObservedRunningTime="2025-11-27 17:28:35.761400723 +0000 UTC m=+1138.104227061" Nov 27 17:28:36 crc kubenswrapper[4792]: I1127 17:28:36.643689 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6644d5b8df-w6zdt" Nov 27 17:28:38 crc kubenswrapper[4792]: I1127 17:28:38.290304 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:28:38 crc kubenswrapper[4792]: I1127 17:28:38.290745 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:28:38 crc 
kubenswrapper[4792]: I1127 17:28:38.290811 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx"
Nov 27 17:28:38 crc kubenswrapper[4792]: I1127 17:28:38.292771 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d60cedcb892e88638661f9a31eeedcc56ec861fc1db68b55e1cd3c8c8a97edef"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 27 17:28:38 crc kubenswrapper[4792]: I1127 17:28:38.292986 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://d60cedcb892e88638661f9a31eeedcc56ec861fc1db68b55e1cd3c8c8a97edef" gracePeriod=600
Nov 27 17:28:38 crc kubenswrapper[4792]: I1127 17:28:38.711231 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="d60cedcb892e88638661f9a31eeedcc56ec861fc1db68b55e1cd3c8c8a97edef" exitCode=0
Nov 27 17:28:38 crc kubenswrapper[4792]: I1127 17:28:38.711309 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"d60cedcb892e88638661f9a31eeedcc56ec861fc1db68b55e1cd3c8c8a97edef"}
Nov 27 17:28:38 crc kubenswrapper[4792]: I1127 17:28:38.711996 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"96c8b617d1cd650967466a2e285f319ed4525e9b0567767b82907caf8e1a4e24"}
Nov 27 17:28:38 crc kubenswrapper[4792]: I1127 17:28:38.712075 4792 scope.go:117] "RemoveContainer" containerID="eb4dfc187c50b610e23a24ccd114a91f4e733187652bfdcb858d9943f47d0623"
Nov 27 17:28:39 crc kubenswrapper[4792]: I1127 17:28:39.986995 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-gqmrh"
Nov 27 17:28:40 crc kubenswrapper[4792]: I1127 17:28:40.158296 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-dnsbx"
Nov 27 17:28:40 crc kubenswrapper[4792]: I1127 17:28:40.279248 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-2lfqp"
Nov 27 17:28:40 crc kubenswrapper[4792]: I1127 17:28:40.361859 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-v7kfm"
Nov 27 17:28:40 crc kubenswrapper[4792]: I1127 17:28:40.699181 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-z5rhr"
Nov 27 17:28:40 crc kubenswrapper[4792]: I1127 17:28:40.881067 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-ff79b6df5-jrwv2"
Nov 27 17:28:46 crc kubenswrapper[4792]: I1127 17:28:46.020837 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd"
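The machine-config-daemon sequence above is the kubelet's standard liveness-failure path: the prober's HTTP GET to http://127.0.0.1:8798/health is refused, the probe is reported unhealthy, kuberuntime logs "failed liveness probe, will be restarted" and kills the container with the pod's 600s termination grace period, and PLEG then reports ContainerDied (exitCode=0, a clean shutdown) followed by ContainerStarted for the replacement; the trailing RemoveContainer prunes an older dead instance of the same container. A rough Go equivalent of one probe attempt, using the URL from the log; per Kubernetes HTTP-probe semantics any status in [200, 400) passes, and a dial error like the one logged counts as failure (the 1s timeout stands in for the probe's timeoutSeconds, which this log does not show):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// "dial tcp 127.0.0.1:8798: connect: connection refused" surfaces here.
		log.Fatalf("probe failed: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		log.Printf("probe succeeded: %d", resp.StatusCode)
	} else {
		log.Printf("probe failed: status %d", resp.StatusCode)
	}
}
```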
pod="openstack-operators/infra-operator-controller-manager-57548d458d-zklgd" Nov 27 17:28:46 crc kubenswrapper[4792]: I1127 17:28:46.093049 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.045749 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z7gzw"] Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.049474 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.052702 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lz9bb" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.052716 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.052841 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.052932 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.058853 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z7gzw"] Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.098804 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9qf7"] Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.100706 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.103124 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.122786 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9qf7"] Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.154353 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37696559-360b-4fa3-94e5-253dff5f4b2b-config\") pod \"dnsmasq-dns-675f4bcbfc-z7gzw\" (UID: \"37696559-360b-4fa3-94e5-253dff5f4b2b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.154433 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjvg\" (UniqueName: \"kubernetes.io/projected/37696559-360b-4fa3-94e5-253dff5f4b2b-kube-api-access-mbjvg\") pod \"dnsmasq-dns-675f4bcbfc-z7gzw\" (UID: \"37696559-360b-4fa3-94e5-253dff5f4b2b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.255867 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-config\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.255931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/37696559-360b-4fa3-94e5-253dff5f4b2b-config\") pod \"dnsmasq-dns-675f4bcbfc-z7gzw\" (UID: \"37696559-360b-4fa3-94e5-253dff5f4b2b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.255966 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbjvg\" (UniqueName: \"kubernetes.io/projected/37696559-360b-4fa3-94e5-253dff5f4b2b-kube-api-access-mbjvg\") pod \"dnsmasq-dns-675f4bcbfc-z7gzw\" (UID: \"37696559-360b-4fa3-94e5-253dff5f4b2b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.256011 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.256051 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2b84\" (UniqueName: \"kubernetes.io/projected/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-kube-api-access-f2b84\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.256936 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37696559-360b-4fa3-94e5-253dff5f4b2b-config\") pod \"dnsmasq-dns-675f4bcbfc-z7gzw\" (UID: \"37696559-360b-4fa3-94e5-253dff5f4b2b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.298539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbjvg\" (UniqueName: \"kubernetes.io/projected/37696559-360b-4fa3-94e5-253dff5f4b2b-kube-api-access-mbjvg\") pod \"dnsmasq-dns-675f4bcbfc-z7gzw\" (UID: \"37696559-360b-4fa3-94e5-253dff5f4b2b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.358071 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-config\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.358156 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.358201 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2b84\" (UniqueName: \"kubernetes.io/projected/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-kube-api-access-f2b84\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.359005 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-config\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.359103 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.375727 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.381508 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2b84\" (UniqueName: \"kubernetes.io/projected/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-kube-api-access-f2b84\") pod \"dnsmasq-dns-78dd6ddcc-f9qf7\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.419443 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.878422 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z7gzw"] Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.945047 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" event={"ID":"37696559-360b-4fa3-94e5-253dff5f4b2b","Type":"ContainerStarted","Data":"d637df564cb1e9418633d5ec617e8ffdfd308f1ac5d0a5f39f066b852cb2f8ba"} Nov 27 17:29:04 crc kubenswrapper[4792]: I1127 17:29:04.969353 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9qf7"] Nov 27 17:29:04 crc kubenswrapper[4792]: W1127 17:29:04.975576 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffc3ed2d_f1c6_405a_af02_ec63bc1929c9.slice/crio-8accaf555eea921cf0285bd094d4a78113416a7936b39e79c81971b67d476cb7 WatchSource:0}: Error finding container 8accaf555eea921cf0285bd094d4a78113416a7936b39e79c81971b67d476cb7: Status 404 returned error can't find the container with id 8accaf555eea921cf0285bd094d4a78113416a7936b39e79c81971b67d476cb7 Nov 27 17:29:05 crc kubenswrapper[4792]: I1127 17:29:05.791113 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z7gzw"] Nov 27 17:29:05 crc kubenswrapper[4792]: I1127 17:29:05.825834 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-7qzwv"] Nov 27 17:29:05 crc kubenswrapper[4792]: I1127 17:29:05.834129 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:05 crc kubenswrapper[4792]: I1127 17:29:05.842172 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-7qzwv"] Nov 27 17:29:05 crc kubenswrapper[4792]: I1127 17:29:05.979307 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" event={"ID":"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9","Type":"ContainerStarted","Data":"8accaf555eea921cf0285bd094d4a78113416a7936b39e79c81971b67d476cb7"} Nov 27 17:29:05 crc kubenswrapper[4792]: I1127 17:29:05.992301 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:05 crc kubenswrapper[4792]: I1127 17:29:05.992380 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc2jh\" (UniqueName: \"kubernetes.io/projected/60ae1903-fc35-46be-8965-44bfa16135ba-kube-api-access-hc2jh\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:05 crc kubenswrapper[4792]: I1127 17:29:05.992439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-config\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.076952 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9qf7"] Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.094332 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-config\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.094714 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.095479 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-config\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.097790 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc2jh\" (UniqueName: \"kubernetes.io/projected/60ae1903-fc35-46be-8965-44bfa16135ba-kube-api-access-hc2jh\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.098708 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.107470 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znctf"] Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.109050 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.127204 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc2jh\" (UniqueName: \"kubernetes.io/projected/60ae1903-fc35-46be-8965-44bfa16135ba-kube-api-access-hc2jh\") pod \"dnsmasq-dns-5ccc8479f9-7qzwv\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.131054 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znctf"] Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.160373 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.201400 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.201522 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-config\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.201558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48q7f\" (UniqueName: \"kubernetes.io/projected/8be51e3d-884a-49bc-a51e-98d57ad245df-kube-api-access-48q7f\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.303678 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-config\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.303743 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48q7f\" (UniqueName: \"kubernetes.io/projected/8be51e3d-884a-49bc-a51e-98d57ad245df-kube-api-access-48q7f\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.303874 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.304792 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-config\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.305019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.348463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48q7f\" (UniqueName: \"kubernetes.io/projected/8be51e3d-884a-49bc-a51e-98d57ad245df-kube-api-access-48q7f\") pod \"dnsmasq-dns-57d769cc4f-znctf\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.484326 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.841681 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-7qzwv"] Nov 27 17:29:06 crc kubenswrapper[4792]: W1127 17:29:06.842834 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60ae1903_fc35_46be_8965_44bfa16135ba.slice/crio-d4333a9308db5beb263d15b051920f9c7cf00e92823fb8fc69457d544a3df777 WatchSource:0}: Error finding container d4333a9308db5beb263d15b051920f9c7cf00e92823fb8fc69457d544a3df777: Status 404 returned error can't find the container with id d4333a9308db5beb263d15b051920f9c7cf00e92823fb8fc69457d544a3df777 Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.940888 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.942363 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.944047 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.944259 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.944371 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.944480 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.944750 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.944851 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.944938 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tdpx2" Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.975934 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:29:06 crc kubenswrapper[4792]: I1127 17:29:06.997766 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" event={"ID":"60ae1903-fc35-46be-8965-44bfa16135ba","Type":"ContainerStarted","Data":"d4333a9308db5beb263d15b051920f9c7cf00e92823fb8fc69457d544a3df777"} Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.025533 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbbf8d9a-2069-4544-92db-ad5174339775-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.025574 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnlnd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-kube-api-access-wnlnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.025709 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.025749 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.025904 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.026048 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.026117 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.026156 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbf8d9a-2069-4544-92db-ad5174339775-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.026260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.026299 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.026361 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.047231 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znctf"] Nov 27 17:29:07 crc kubenswrapper[4792]: W1127 17:29:07.060417 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8be51e3d_884a_49bc_a51e_98d57ad245df.slice/crio-15b1dd4b3f3c01bfabec35378cf54ef03904fc106110a830cb25209b582b724c WatchSource:0}: Error finding container 15b1dd4b3f3c01bfabec35378cf54ef03904fc106110a830cb25209b582b724c: Status 404 returned error can't find the container with id 15b1dd4b3f3c01bfabec35378cf54ef03904fc106110a830cb25209b582b724c Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127764 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127797 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127832 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbbf8d9a-2069-4544-92db-ad5174339775-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127847 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnlnd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-kube-api-access-wnlnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127868 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127885 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127964 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.127986 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 
17:29:07.128004 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbf8d9a-2069-4544-92db-ad5174339775-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.133956 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbf8d9a-2069-4544-92db-ad5174339775-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.134147 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.137525 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.137834 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.138115 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.138911 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.138976 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.141190 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.142481 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/dbbf8d9a-2069-4544-92db-ad5174339775-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.147225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.187168 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnlnd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-kube-api-access-wnlnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.199278 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.279562 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.281511 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.282439 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.288524 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.289360 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.289520 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.289522 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.289606 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.289920 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.290005 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.290018 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-52cs2" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.331823 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc 
kubenswrapper[4792]: I1127 17:29:07.331886 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.331915 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27d6022e-eea3-41e9-b880-620328dc5d78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.331952 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.331973 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.331992 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27d6022e-eea3-41e9-b880-620328dc5d78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.332018 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.332065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.332084 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-config-data\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.332132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq4sm\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-kube-api-access-bq4sm\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.332207 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.434834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435195 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435220 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27d6022e-eea3-41e9-b880-620328dc5d78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435256 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435296 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27d6022e-eea3-41e9-b880-620328dc5d78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435323 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435374 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435392 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-config-data\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " 
pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435442 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq4sm\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-kube-api-access-bq4sm\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435485 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435561 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.435936 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.436250 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.443973 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.444785 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-config-data\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.445425 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.447730 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.456334 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq4sm\" (UniqueName: 
\"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-kube-api-access-bq4sm\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.458172 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27d6022e-eea3-41e9-b880-620328dc5d78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.462515 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27d6022e-eea3-41e9-b880-620328dc5d78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.478492 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.483087 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.618683 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 27 17:29:07 crc kubenswrapper[4792]: I1127 17:29:07.919767 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.025281 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbbf8d9a-2069-4544-92db-ad5174339775","Type":"ContainerStarted","Data":"1c9ebfea36917607572bf31592276546b601a0cd133b9c06868b8893d290788c"} Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.026519 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" event={"ID":"8be51e3d-884a-49bc-a51e-98d57ad245df","Type":"ContainerStarted","Data":"15b1dd4b3f3c01bfabec35378cf54ef03904fc106110a830cb25209b582b724c"} Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.238343 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.771500 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.773776 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.786851 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.787504 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-nklk4" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.787924 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.791095 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.797426 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.823185 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.880757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.880950 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6358b-2030-436d-a847-724a53f802ea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.881159 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vnw7\" (UniqueName: \"kubernetes.io/projected/8ed6358b-2030-436d-a847-724a53f802ea-kube-api-access-7vnw7\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.881221 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-config-data-default\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.881380 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed6358b-2030-436d-a847-724a53f802ea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.881419 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.881670 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-kolla-config\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.881752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ed6358b-2030-436d-a847-724a53f802ea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.992596 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.992901 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6358b-2030-436d-a847-724a53f802ea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.992925 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vnw7\" (UniqueName: \"kubernetes.io/projected/8ed6358b-2030-436d-a847-724a53f802ea-kube-api-access-7vnw7\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.992946 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-config-data-default\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.992983 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed6358b-2030-436d-a847-724a53f802ea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.993002 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.993024 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-kolla-config\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.993055 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ed6358b-2030-436d-a847-724a53f802ea-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.993438 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8ed6358b-2030-436d-a847-724a53f802ea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.994488 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.995451 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.996328 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-kolla-config\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:08 crc kubenswrapper[4792]: I1127 17:29:08.996993 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8ed6358b-2030-436d-a847-724a53f802ea-config-data-default\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:09 crc kubenswrapper[4792]: I1127 17:29:09.011350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed6358b-2030-436d-a847-724a53f802ea-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:09 crc kubenswrapper[4792]: I1127 17:29:09.025671 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed6358b-2030-436d-a847-724a53f802ea-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:09 crc kubenswrapper[4792]: I1127 17:29:09.033225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vnw7\" (UniqueName: \"kubernetes.io/projected/8ed6358b-2030-436d-a847-724a53f802ea-kube-api-access-7vnw7\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:09 crc kubenswrapper[4792]: I1127 17:29:09.060570 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"8ed6358b-2030-436d-a847-724a53f802ea\") " pod="openstack/openstack-galera-0" Nov 27 17:29:09 crc kubenswrapper[4792]: I1127 17:29:09.100774 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 27 17:29:09 crc kubenswrapper[4792]: I1127 17:29:09.125768 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27d6022e-eea3-41e9-b880-620328dc5d78","Type":"ContainerStarted","Data":"07eba9bf2ea9734a419856756e27fc2df2175b506bc259156f6c2a403e3af1c6"} Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:09.865571 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.194612 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8ed6358b-2030-436d-a847-724a53f802ea","Type":"ContainerStarted","Data":"4d31ad9e2f42b5acc3f38552c29a87763a5ed6af847cc8abaacdcaf45f7d5f88"} Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.475073 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.476666 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.483579 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jtdsl" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.483959 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.484222 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.484848 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.485343 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.650301 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rqf\" (UniqueName: \"kubernetes.io/projected/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-kube-api-access-w4rqf\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.650387 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.650849 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.650890 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.650920 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.651150 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.651325 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.651394 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.719517 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.731975 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.738087 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-svh5d" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.738319 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.738433 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.740706 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.752928 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.752975 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.752996 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.753038 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.753076 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.753102 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.753131 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4rqf\" (UniqueName: \"kubernetes.io/projected/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-kube-api-access-w4rqf\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.753169 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.756512 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.758267 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.759285 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.759844 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.765331 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.767558 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.773592 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.783423 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4rqf\" (UniqueName: \"kubernetes.io/projected/1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b-kube-api-access-w4rqf\") pod \"openstack-cell1-galera-0\" (UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.808368 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" 
(UID: \"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b\") " pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.826106 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.854656 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.854783 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-config-data\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.855020 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-kolla-config\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.855076 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhkd\" (UniqueName: \"kubernetes.io/projected/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-kube-api-access-sdhkd\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.855169 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.959617 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.959685 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.959723 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-config-data\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.959751 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-kolla-config\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " 
pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.960817 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-kolla-config\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.960825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-config-data\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.960884 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdhkd\" (UniqueName: \"kubernetes.io/projected/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-kube-api-access-sdhkd\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.963497 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.977728 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:10.991173 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdhkd\" (UniqueName: \"kubernetes.io/projected/a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666-kube-api-access-sdhkd\") pod \"memcached-0\" (UID: \"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666\") " pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:11.187538 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:12.753968 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:12.761835 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:12.766061 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-t9bn4" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:12.775347 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:12.809839 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kklf6\" (UniqueName: \"kubernetes.io/projected/1ba0cf2f-cd7d-4133-9746-61abf95e4420-kube-api-access-kklf6\") pod \"kube-state-metrics-0\" (UID: \"1ba0cf2f-cd7d-4133-9746-61abf95e4420\") " pod="openstack/kube-state-metrics-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:12.911763 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kklf6\" (UniqueName: \"kubernetes.io/projected/1ba0cf2f-cd7d-4133-9746-61abf95e4420-kube-api-access-kklf6\") pod \"kube-state-metrics-0\" (UID: \"1ba0cf2f-cd7d-4133-9746-61abf95e4420\") " pod="openstack/kube-state-metrics-0" Nov 27 17:29:12 crc kubenswrapper[4792]: I1127 17:29:12.956697 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kklf6\" (UniqueName: \"kubernetes.io/projected/1ba0cf2f-cd7d-4133-9746-61abf95e4420-kube-api-access-kklf6\") pod \"kube-state-metrics-0\" (UID: \"1ba0cf2f-cd7d-4133-9746-61abf95e4420\") " pod="openstack/kube-state-metrics-0" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.089910 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.498301 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq"] Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.499737 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.517251 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq"] Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.522818 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-bhljc" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.530067 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.614191 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.632249 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbd5w\" (UniqueName: \"kubernetes.io/projected/6df6ef32-ac48-4c52-9c23-95926cf8c67d-kube-api-access-qbd5w\") pod \"observability-ui-dashboards-7d5fb4cbfb-prnwq\" (UID: \"6df6ef32-ac48-4c52-9c23-95926cf8c67d\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.632481 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6df6ef32-ac48-4c52-9c23-95926cf8c67d-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-prnwq\" (UID: \"6df6ef32-ac48-4c52-9c23-95926cf8c67d\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.649587 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.737314 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbd5w\" (UniqueName: \"kubernetes.io/projected/6df6ef32-ac48-4c52-9c23-95926cf8c67d-kube-api-access-qbd5w\") pod \"observability-ui-dashboards-7d5fb4cbfb-prnwq\" (UID: \"6df6ef32-ac48-4c52-9c23-95926cf8c67d\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.737413 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6df6ef32-ac48-4c52-9c23-95926cf8c67d-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-prnwq\" (UID: \"6df6ef32-ac48-4c52-9c23-95926cf8c67d\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:13 crc kubenswrapper[4792]: E1127 17:29:13.738008 4792 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 27 17:29:13 crc kubenswrapper[4792]: E1127 17:29:13.738063 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6df6ef32-ac48-4c52-9c23-95926cf8c67d-serving-cert podName:6df6ef32-ac48-4c52-9c23-95926cf8c67d nodeName:}" failed. No retries permitted until 2025-11-27 17:29:14.238046761 +0000 UTC m=+1176.580873079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6df6ef32-ac48-4c52-9c23-95926cf8c67d-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-prnwq" (UID: "6df6ef32-ac48-4c52-9c23-95926cf8c67d") : secret "observability-ui-dashboards" not found Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.791600 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbd5w\" (UniqueName: \"kubernetes.io/projected/6df6ef32-ac48-4c52-9c23-95926cf8c67d-kube-api-access-qbd5w\") pod \"observability-ui-dashboards-7d5fb4cbfb-prnwq\" (UID: \"6df6ef32-ac48-4c52-9c23-95926cf8c67d\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.913209 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5c8f5f7d59-gzjgn"] Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.914726 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:13 crc kubenswrapper[4792]: I1127 17:29:13.924302 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c8f5f7d59-gzjgn"] Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.013052 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.015342 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.021432 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.022085 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jcr5s" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.023487 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.023906 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.033966 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.035987 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.036230 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.056036 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-oauth-serving-cert\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.056085 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-serving-cert\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.056124 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-config\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.056173 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-trusted-ca-bundle\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.056199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82j88\" (UniqueName: \"kubernetes.io/projected/5eae0fad-6d53-49e3-bb12-1fcdbc604315-kube-api-access-82j88\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.056224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-service-ca\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.056262 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-oauth-config\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.157869 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.157924 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-oauth-config\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.157948 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2af964c3-1de4-48af-a89c-df58527be8cb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc 
kubenswrapper[4792]: I1127 17:29:14.157982 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158009 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-config\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158083 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-oauth-serving-cert\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158107 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-serving-cert\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-config\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158180 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2af964c3-1de4-48af-a89c-df58527be8cb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9clp\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-kube-api-access-g9clp\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158217 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-trusted-ca-bundle\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " 
pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158238 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82j88\" (UniqueName: \"kubernetes.io/projected/5eae0fad-6d53-49e3-bb12-1fcdbc604315-kube-api-access-82j88\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158259 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-service-ca\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.158281 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.163089 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-oauth-config\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.163677 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-oauth-serving-cert\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.166504 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-config\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.167463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-trusted-ca-bundle\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.168217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5eae0fad-6d53-49e3-bb12-1fcdbc604315-service-ca\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.170112 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5eae0fad-6d53-49e3-bb12-1fcdbc604315-console-serving-cert\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " 
pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.210415 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82j88\" (UniqueName: \"kubernetes.io/projected/5eae0fad-6d53-49e3-bb12-1fcdbc604315-kube-api-access-82j88\") pod \"console-5c8f5f7d59-gzjgn\" (UID: \"5eae0fad-6d53-49e3-bb12-1fcdbc604315\") " pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.256196 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260034 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2af964c3-1de4-48af-a89c-df58527be8cb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260091 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9clp\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-kube-api-access-g9clp\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260150 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260198 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260239 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2af964c3-1de4-48af-a89c-df58527be8cb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260317 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-config\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260348 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.260429 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6df6ef32-ac48-4c52-9c23-95926cf8c67d-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-prnwq\" (UID: \"6df6ef32-ac48-4c52-9c23-95926cf8c67d\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.261926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2af964c3-1de4-48af-a89c-df58527be8cb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.263865 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2af964c3-1de4-48af-a89c-df58527be8cb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.265527 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.265572 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.267479 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.267511 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a4ae8a256b777150ec36228df485705416cca05b1e9dfc7938ea8781e1038e97/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.267543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.268406 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-config\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.268885 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6df6ef32-ac48-4c52-9c23-95926cf8c67d-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-prnwq\" (UID: \"6df6ef32-ac48-4c52-9c23-95926cf8c67d\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.288467 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9clp\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-kube-api-access-g9clp\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.327369 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"prometheus-metric-storage-0\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.376970 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 27 17:29:14 crc kubenswrapper[4792]: I1127 17:29:14.460194 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" Nov 27 17:29:15 crc kubenswrapper[4792]: I1127 17:29:15.904364 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 27 17:29:15 crc kubenswrapper[4792]: I1127 17:29:15.907355 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:15 crc kubenswrapper[4792]: I1127 17:29:15.918455 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 27 17:29:15 crc kubenswrapper[4792]: I1127 17:29:15.918553 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 27 17:29:15 crc kubenswrapper[4792]: I1127 17:29:15.918791 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 27 17:29:15 crc kubenswrapper[4792]: I1127 17:29:15.918967 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 27 17:29:15 crc kubenswrapper[4792]: I1127 17:29:15.919317 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7mn89" Nov 27 17:29:15 crc kubenswrapper[4792]: I1127 17:29:15.935658 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.008837 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7e5347-cd23-498f-ac14-95ce8f106b97-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.008878 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzxnp\" (UniqueName: \"kubernetes.io/projected/3b7e5347-cd23-498f-ac14-95ce8f106b97-kube-api-access-zzxnp\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.008927 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.009072 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7e5347-cd23-498f-ac14-95ce8f106b97-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.009313 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.009460 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7e5347-cd23-498f-ac14-95ce8f106b97-config\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.009915 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.010005 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.111973 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7e5347-cd23-498f-ac14-95ce8f106b97-config\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.112106 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.112197 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.112236 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7e5347-cd23-498f-ac14-95ce8f106b97-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.112258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzxnp\" (UniqueName: \"kubernetes.io/projected/3b7e5347-cd23-498f-ac14-95ce8f106b97-kube-api-access-zzxnp\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.112291 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.112325 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7e5347-cd23-498f-ac14-95ce8f106b97-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.112383 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 
17:29:16.112941 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.113106 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b7e5347-cd23-498f-ac14-95ce8f106b97-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.113145 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7e5347-cd23-498f-ac14-95ce8f106b97-config\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.114434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b7e5347-cd23-498f-ac14-95ce8f106b97-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.119727 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.124708 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.138048 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7e5347-cd23-498f-ac14-95ce8f106b97-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.146482 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzxnp\" (UniqueName: \"kubernetes.io/projected/3b7e5347-cd23-498f-ac14-95ce8f106b97-kube-api-access-zzxnp\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.149906 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b7e5347-cd23-498f-ac14-95ce8f106b97\") " pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:16 crc kubenswrapper[4792]: I1127 17:29:16.234981 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.524803 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n2t2x"] Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.526425 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.528402 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.528716 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.535254 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-f2ncc" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.538506 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n2t2x"] Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.542059 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-run-ovn\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.542153 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8445903e-bdf0-4581-a2ce-728410f878ac-combined-ca-bundle\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.542241 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8445903e-bdf0-4581-a2ce-728410f878ac-scripts\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.542311 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-log-ovn\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.542364 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq5pp\" (UniqueName: \"kubernetes.io/projected/8445903e-bdf0-4581-a2ce-728410f878ac-kube-api-access-jq5pp\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.542390 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8445903e-bdf0-4581-a2ce-728410f878ac-ovn-controller-tls-certs\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.542422 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-run\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.642453 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-dzgvb"] Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644147 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq5pp\" (UniqueName: \"kubernetes.io/projected/8445903e-bdf0-4581-a2ce-728410f878ac-kube-api-access-jq5pp\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644195 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8445903e-bdf0-4581-a2ce-728410f878ac-ovn-controller-tls-certs\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644235 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-run\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644268 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-run-ovn\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644309 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8445903e-bdf0-4581-a2ce-728410f878ac-combined-ca-bundle\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644370 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8445903e-bdf0-4581-a2ce-728410f878ac-scripts\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644406 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-log-ovn\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644436 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.644970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-log-ovn\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.645057 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-run\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.645104 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8445903e-bdf0-4581-a2ce-728410f878ac-var-run-ovn\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.647667 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8445903e-bdf0-4581-a2ce-728410f878ac-scripts\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.649336 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8445903e-bdf0-4581-a2ce-728410f878ac-ovn-controller-tls-certs\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.655488 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8445903e-bdf0-4581-a2ce-728410f878ac-combined-ca-bundle\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.659040 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dzgvb"] Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.696370 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq5pp\" (UniqueName: \"kubernetes.io/projected/8445903e-bdf0-4581-a2ce-728410f878ac-kube-api-access-jq5pp\") pod \"ovn-controller-n2t2x\" (UID: \"8445903e-bdf0-4581-a2ce-728410f878ac\") " pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.848780 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgljj\" (UniqueName: \"kubernetes.io/projected/a82ac7ae-1443-4fbc-a8bb-2383c148b809-kube-api-access-vgljj\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.848834 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-lib\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " 
pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.848868 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-log\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.848904 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a82ac7ae-1443-4fbc-a8bb-2383c148b809-scripts\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.848940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-run\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.849013 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-etc-ovs\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.863362 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.951144 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-run\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.951325 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-run\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.951505 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-etc-ovs\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.951750 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-etc-ovs\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.951957 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgljj\" (UniqueName: \"kubernetes.io/projected/a82ac7ae-1443-4fbc-a8bb-2383c148b809-kube-api-access-vgljj\") pod \"ovn-controller-ovs-dzgvb\" (UID: 
\"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.952008 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-lib\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.952043 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-log\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.952110 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a82ac7ae-1443-4fbc-a8bb-2383c148b809-scripts\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.952252 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-lib\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.952313 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a82ac7ae-1443-4fbc-a8bb-2383c148b809-var-log\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.955473 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a82ac7ae-1443-4fbc-a8bb-2383c148b809-scripts\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:17 crc kubenswrapper[4792]: I1127 17:29:17.967511 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgljj\" (UniqueName: \"kubernetes.io/projected/a82ac7ae-1443-4fbc-a8bb-2383c148b809-kube-api-access-vgljj\") pod \"ovn-controller-ovs-dzgvb\" (UID: \"a82ac7ae-1443-4fbc-a8bb-2383c148b809\") " pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:18 crc kubenswrapper[4792]: I1127 17:29:18.102991 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:18 crc kubenswrapper[4792]: W1127 17:29:18.317213 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d97d9a8_d646_4cd4_99a5_e0e5f5976f5b.slice/crio-3dfc7fc6dad05f753a6d50e4ac8e0137cf3ddc8dc1849e1ee924807fc64e883b WatchSource:0}: Error finding container 3dfc7fc6dad05f753a6d50e4ac8e0137cf3ddc8dc1849e1ee924807fc64e883b: Status 404 returned error can't find the container with id 3dfc7fc6dad05f753a6d50e4ac8e0137cf3ddc8dc1849e1ee924807fc64e883b Nov 27 17:29:18 crc kubenswrapper[4792]: I1127 17:29:18.363006 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b","Type":"ContainerStarted","Data":"3dfc7fc6dad05f753a6d50e4ac8e0137cf3ddc8dc1849e1ee924807fc64e883b"} Nov 27 17:29:19 crc kubenswrapper[4792]: W1127 17:29:19.035847 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4f2370d_a8cf_4b58_8cdb_2fcd03c5f666.slice/crio-936c75684bf7ab00e560749a7672332f8f71d0857d477b315b367ca4703d3eee WatchSource:0}: Error finding container 936c75684bf7ab00e560749a7672332f8f71d0857d477b315b367ca4703d3eee: Status 404 returned error can't find the container with id 936c75684bf7ab00e560749a7672332f8f71d0857d477b315b367ca4703d3eee Nov 27 17:29:19 crc kubenswrapper[4792]: I1127 17:29:19.371177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666","Type":"ContainerStarted","Data":"936c75684bf7ab00e560749a7672332f8f71d0857d477b315b367ca4703d3eee"} Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.093911 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.095916 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.103198 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-x475q" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.103232 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.103347 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.103481 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.138311 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.205747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.206834 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.206879 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.206941 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.206957 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.207060 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvh5v\" (UniqueName: \"kubernetes.io/projected/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-kube-api-access-pvh5v\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.207086 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.207132 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.308489 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.308562 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.308604 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.308633 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.308673 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.308719 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.308734 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.308810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvh5v\" (UniqueName: \"kubernetes.io/projected/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-kube-api-access-pvh5v\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.309101 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.309509 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.310068 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.310676 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-config\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.316000 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.317760 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.329181 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.330037 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvh5v\" (UniqueName: \"kubernetes.io/projected/4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5-kube-api-access-pvh5v\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.346002 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5\") " pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:20 crc kubenswrapper[4792]: I1127 17:29:20.430570 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:28 crc kubenswrapper[4792]: E1127 17:29:28.499593 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 27 17:29:28 crc kubenswrapper[4792]: E1127 17:29:28.500555 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnlnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(dbbf8d9a-2069-4544-92db-ad5174339775): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:29:28 crc kubenswrapper[4792]: E1127 17:29:28.501848 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" Nov 27 17:29:29 crc kubenswrapper[4792]: E1127 17:29:29.466708 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" Nov 27 17:29:30 crc kubenswrapper[4792]: E1127 17:29:30.573611 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 27 17:29:30 crc kubenswrapper[4792]: E1127 17:29:30.574052 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bq4sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-server-0_openstack(27d6022e-eea3-41e9-b880-620328dc5d78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:29:30 crc kubenswrapper[4792]: E1127 17:29:30.575300 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" Nov 27 17:29:30 crc kubenswrapper[4792]: E1127 17:29:30.576175 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 27 17:29:30 crc kubenswrapper[4792]: E1127 17:29:30.576309 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vnw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(8ed6358b-2030-436d-a847-724a53f802ea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:29:30 crc kubenswrapper[4792]: E1127 17:29:30.577738 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="8ed6358b-2030-436d-a847-724a53f802ea" Nov 27 17:29:31 crc kubenswrapper[4792]: I1127 
17:29:31.048846 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq"] Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.484803 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.484819 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="8ed6358b-2030-436d-a847-724a53f802ea" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.619938 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.620122 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2b84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-f9qf7_openstack(ffc3ed2d-f1c6-405a-af02-ec63bc1929c9): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.621205 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" podUID="ffc3ed2d-f1c6-405a-af02-ec63bc1929c9" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.644261 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.644424 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbjvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-z7gzw_openstack(37696559-360b-4fa3-94e5-253dff5f4b2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.645590 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" podUID="37696559-360b-4fa3-94e5-253dff5f4b2b" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.653361 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.653505 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-48q7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-znctf_openstack(8be51e3d-884a-49bc-a51e-98d57ad245df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.654697 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" podUID="8be51e3d-884a-49bc-a51e-98d57ad245df" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.661722 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.661879 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 
5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc2jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-7qzwv_openstack(60ae1903-fc35-46be-8965-44bfa16135ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:29:31 crc kubenswrapper[4792]: E1127 17:29:31.663083 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" podUID="60ae1903-fc35-46be-8965-44bfa16135ba" Nov 27 17:29:32 crc kubenswrapper[4792]: W1127 17:29:32.238880 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df6ef32_ac48_4c52_9c23_95926cf8c67d.slice/crio-9889b970550ffdf69513dc0a9bba13560d03b778428515c054ec06c8507036f4 WatchSource:0}: Error finding container 9889b970550ffdf69513dc0a9bba13560d03b778428515c054ec06c8507036f4: Status 404 returned error can't find the container with id 9889b970550ffdf69513dc0a9bba13560d03b778428515c054ec06c8507036f4 Nov 27 17:29:32 crc kubenswrapper[4792]: I1127 17:29:32.492733 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" event={"ID":"6df6ef32-ac48-4c52-9c23-95926cf8c67d","Type":"ContainerStarted","Data":"9889b970550ffdf69513dc0a9bba13560d03b778428515c054ec06c8507036f4"} Nov 27 17:29:32 crc kubenswrapper[4792]: E1127 17:29:32.496030 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" 
pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" podUID="60ae1903-fc35-46be-8965-44bfa16135ba" Nov 27 17:29:32 crc kubenswrapper[4792]: E1127 17:29:32.496392 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" podUID="8be51e3d-884a-49bc-a51e-98d57ad245df" Nov 27 17:29:32 crc kubenswrapper[4792]: I1127 17:29:32.988683 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.067704 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c8f5f7d59-gzjgn"] Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.266438 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.272934 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.367356 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37696559-360b-4fa3-94e5-253dff5f4b2b-config\") pod \"37696559-360b-4fa3-94e5-253dff5f4b2b\" (UID: \"37696559-360b-4fa3-94e5-253dff5f4b2b\") " Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.367499 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbjvg\" (UniqueName: \"kubernetes.io/projected/37696559-360b-4fa3-94e5-253dff5f4b2b-kube-api-access-mbjvg\") pod \"37696559-360b-4fa3-94e5-253dff5f4b2b\" (UID: \"37696559-360b-4fa3-94e5-253dff5f4b2b\") " Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.368096 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37696559-360b-4fa3-94e5-253dff5f4b2b-config" (OuterVolumeSpecName: "config") pod "37696559-360b-4fa3-94e5-253dff5f4b2b" (UID: "37696559-360b-4fa3-94e5-253dff5f4b2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.372977 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37696559-360b-4fa3-94e5-253dff5f4b2b-kube-api-access-mbjvg" (OuterVolumeSpecName: "kube-api-access-mbjvg") pod "37696559-360b-4fa3-94e5-253dff5f4b2b" (UID: "37696559-360b-4fa3-94e5-253dff5f4b2b"). InnerVolumeSpecName "kube-api-access-mbjvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.469099 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2b84\" (UniqueName: \"kubernetes.io/projected/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-kube-api-access-f2b84\") pod \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.469229 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-config\") pod \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.469611 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-config" (OuterVolumeSpecName: "config") pod "ffc3ed2d-f1c6-405a-af02-ec63bc1929c9" (UID: "ffc3ed2d-f1c6-405a-af02-ec63bc1929c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.469752 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-dns-svc\") pod \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\" (UID: \"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9\") " Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.470149 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ffc3ed2d-f1c6-405a-af02-ec63bc1929c9" (UID: "ffc3ed2d-f1c6-405a-af02-ec63bc1929c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.470485 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37696559-360b-4fa3-94e5-253dff5f4b2b-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.470504 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.470541 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.470555 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbjvg\" (UniqueName: \"kubernetes.io/projected/37696559-360b-4fa3-94e5-253dff5f4b2b-kube-api-access-mbjvg\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.473996 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-kube-api-access-f2b84" (OuterVolumeSpecName: "kube-api-access-f2b84") pod "ffc3ed2d-f1c6-405a-af02-ec63bc1929c9" (UID: "ffc3ed2d-f1c6-405a-af02-ec63bc1929c9"). InnerVolumeSpecName "kube-api-access-f2b84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.502450 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1ba0cf2f-cd7d-4133-9746-61abf95e4420","Type":"ContainerStarted","Data":"bbcd885c50230f2bc98df687efbed6a210526e07f0853f3033e2512948561c50"} Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.503851 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" event={"ID":"37696559-360b-4fa3-94e5-253dff5f4b2b","Type":"ContainerDied","Data":"d637df564cb1e9418633d5ec617e8ffdfd308f1ac5d0a5f39f066b852cb2f8ba"} Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.503934 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z7gzw" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.514363 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b","Type":"ContainerStarted","Data":"6c8fd7eaabd963dc0287916aed456a40d6bd01038295376fc869c3b82b15d5d9"} Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.519920 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c8f5f7d59-gzjgn" event={"ID":"5eae0fad-6d53-49e3-bb12-1fcdbc604315","Type":"ContainerStarted","Data":"326f0688cff288e188093c2b5877ebfc2f426f39f7528621dd5f8117738bef36"} Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.519972 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c8f5f7d59-gzjgn" event={"ID":"5eae0fad-6d53-49e3-bb12-1fcdbc604315","Type":"ContainerStarted","Data":"f2378ded9862b91777cdc06027b706c600670c8576727686481e3130f53c5fd5"} Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.530563 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" event={"ID":"ffc3ed2d-f1c6-405a-af02-ec63bc1929c9","Type":"ContainerDied","Data":"8accaf555eea921cf0285bd094d4a78113416a7936b39e79c81971b67d476cb7"} Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.530708 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-f9qf7" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.532496 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666","Type":"ContainerStarted","Data":"2843288704abfad1ef1cbbae56153ac099812b023e1b1fb63c6d14742017641a"} Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.532769 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.577565 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2b84\" (UniqueName: \"kubernetes.io/projected/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9-kube-api-access-f2b84\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.632780 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.662580 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c8f5f7d59-gzjgn" podStartSLOduration=20.662563814 podStartE2EDuration="20.662563814s" podCreationTimestamp="2025-11-27 17:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:29:33.632525415 +0000 UTC m=+1195.975351743" watchObservedRunningTime="2025-11-27 17:29:33.662563814 +0000 UTC m=+1196.005390122" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.666248 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n2t2x"] Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.769843 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z7gzw"] Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.795175 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z7gzw"] Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.804682 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=10.433311273 podStartE2EDuration="23.804664533s" podCreationTimestamp="2025-11-27 17:29:10 +0000 UTC" firstStartedPulling="2025-11-27 17:29:19.039879553 +0000 UTC m=+1181.382705871" lastFinishedPulling="2025-11-27 17:29:32.411232813 +0000 UTC m=+1194.754059131" observedRunningTime="2025-11-27 17:29:33.751854658 +0000 UTC m=+1196.094680976" watchObservedRunningTime="2025-11-27 17:29:33.804664533 +0000 UTC m=+1196.147490851" Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.836581 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9qf7"] Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.836656 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-f9qf7"] Nov 27 17:29:33 crc kubenswrapper[4792]: I1127 17:29:33.842460 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 27 17:29:33 crc kubenswrapper[4792]: W1127 17:29:33.945826 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4708b49f_1bdd_4ef7_8b3e_9d572f4a2cb5.slice/crio-f09ec0efa321be17d288b2c197c37d0779bb3465b5eb60ca541d89232cd74db3 WatchSource:0}: Error finding container f09ec0efa321be17d288b2c197c37d0779bb3465b5eb60ca541d89232cd74db3: Status 404 
returned error can't find the container with id f09ec0efa321be17d288b2c197c37d0779bb3465b5eb60ca541d89232cd74db3 Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.257209 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.257268 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.261995 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.347009 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.547583 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5","Type":"ContainerStarted","Data":"f09ec0efa321be17d288b2c197c37d0779bb3465b5eb60ca541d89232cd74db3"} Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.550889 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerStarted","Data":"0bb19d748aa06724e43f18b4535e522dd934beda1f393988028662634d95e4f5"} Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.552484 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n2t2x" event={"ID":"8445903e-bdf0-4581-a2ce-728410f878ac","Type":"ContainerStarted","Data":"c62f35e0f33f493c836fe9df451fb047be59e07435a092bb0dbb6359cb4157a4"} Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.556702 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c8f5f7d59-gzjgn" Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.625789 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dzgvb"] Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.637998 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-549b454695-9zzgx"] Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.703950 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37696559-360b-4fa3-94e5-253dff5f4b2b" path="/var/lib/kubelet/pods/37696559-360b-4fa3-94e5-253dff5f4b2b/volumes" Nov 27 17:29:34 crc kubenswrapper[4792]: I1127 17:29:34.704595 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc3ed2d-f1c6-405a-af02-ec63bc1929c9" path="/var/lib/kubelet/pods/ffc3ed2d-f1c6-405a-af02-ec63bc1929c9/volumes" Nov 27 17:29:34 crc kubenswrapper[4792]: W1127 17:29:34.940928 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b7e5347_cd23_498f_ac14_95ce8f106b97.slice/crio-efd1c358fcd171dad577e7540bcea9595649d666be76482e201ea569bc2eb7b4 WatchSource:0}: Error finding container efd1c358fcd171dad577e7540bcea9595649d666be76482e201ea569bc2eb7b4: Status 404 returned error can't find the container with id efd1c358fcd171dad577e7540bcea9595649d666be76482e201ea569bc2eb7b4 Nov 27 17:29:35 crc kubenswrapper[4792]: I1127 17:29:35.561947 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dzgvb" 
event={"ID":"a82ac7ae-1443-4fbc-a8bb-2383c148b809","Type":"ContainerStarted","Data":"6847141df6c87509073ad6db4dacd8dc8afff6cb5cf83e2b31d693e324d25a4a"} Nov 27 17:29:35 crc kubenswrapper[4792]: I1127 17:29:35.562984 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b7e5347-cd23-498f-ac14-95ce8f106b97","Type":"ContainerStarted","Data":"efd1c358fcd171dad577e7540bcea9595649d666be76482e201ea569bc2eb7b4"} Nov 27 17:29:36 crc kubenswrapper[4792]: I1127 17:29:36.579726 4792 generic.go:334] "Generic (PLEG): container finished" podID="1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b" containerID="6c8fd7eaabd963dc0287916aed456a40d6bd01038295376fc869c3b82b15d5d9" exitCode=0 Nov 27 17:29:36 crc kubenswrapper[4792]: I1127 17:29:36.579802 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b","Type":"ContainerDied","Data":"6c8fd7eaabd963dc0287916aed456a40d6bd01038295376fc869c3b82b15d5d9"} Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.189378 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.664360 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1ba0cf2f-cd7d-4133-9746-61abf95e4420","Type":"ContainerStarted","Data":"ee77d21d98990c8ef21b29587f255577c0c94058f424026a0c7ec8fd34c2522a"} Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.664533 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.666427 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5","Type":"ContainerStarted","Data":"e46186f599b8528eac80fd0be9759e50b9980c83e00dabc7c5804ceab52f0185"} Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.668597 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b","Type":"ContainerStarted","Data":"f14ec7d20c38c0b8d6d8a52500e6210433abc51f0e150b397ca2568f453385d6"} Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.669946 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b7e5347-cd23-498f-ac14-95ce8f106b97","Type":"ContainerStarted","Data":"09ab12d23f92d2f9f46085114c2cfe2c400f282893b5c7f0f5f73d33dffef8c1"} Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.671370 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n2t2x" event={"ID":"8445903e-bdf0-4581-a2ce-728410f878ac","Type":"ContainerStarted","Data":"fb47917a5c9f01924dd8d933c86a5d716871dfae35f17e5efbd80ca5e30bf33d"} Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.672208 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-n2t2x" Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.673504 4792 generic.go:334] "Generic (PLEG): container finished" podID="a82ac7ae-1443-4fbc-a8bb-2383c148b809" containerID="93845b6e4542dfe29d6d4548a375fd1f2df38dcd71e7d6ab64c5fc4059b9eb22" exitCode=0 Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.673544 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dzgvb" 
event={"ID":"a82ac7ae-1443-4fbc-a8bb-2383c148b809","Type":"ContainerDied","Data":"93845b6e4542dfe29d6d4548a375fd1f2df38dcd71e7d6ab64c5fc4059b9eb22"} Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.675038 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" event={"ID":"6df6ef32-ac48-4c52-9c23-95926cf8c67d","Type":"ContainerStarted","Data":"e10e6abab0afa9f6b6fbb43f7e5ea84fd7907f81bf35c17aeff46c38e49d9280"} Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.687034 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.183581893 podStartE2EDuration="29.687010988s" podCreationTimestamp="2025-11-27 17:29:12 +0000 UTC" firstStartedPulling="2025-11-27 17:29:33.019047444 +0000 UTC m=+1195.361873762" lastFinishedPulling="2025-11-27 17:29:40.522476539 +0000 UTC m=+1202.865302857" observedRunningTime="2025-11-27 17:29:41.683424509 +0000 UTC m=+1204.026250827" watchObservedRunningTime="2025-11-27 17:29:41.687010988 +0000 UTC m=+1204.029837306" Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.767095 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.781497813 podStartE2EDuration="32.767073023s" podCreationTimestamp="2025-11-27 17:29:09 +0000 UTC" firstStartedPulling="2025-11-27 17:29:18.327872377 +0000 UTC m=+1180.670698705" lastFinishedPulling="2025-11-27 17:29:32.313447557 +0000 UTC m=+1194.656273915" observedRunningTime="2025-11-27 17:29:41.759190426 +0000 UTC m=+1204.102016744" watchObservedRunningTime="2025-11-27 17:29:41.767073023 +0000 UTC m=+1204.109899351" Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.787264 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-prnwq" podStartSLOduration=25.762479297 podStartE2EDuration="28.787242055s" podCreationTimestamp="2025-11-27 17:29:13 +0000 UTC" firstStartedPulling="2025-11-27 17:29:32.244995302 +0000 UTC m=+1194.587821660" lastFinishedPulling="2025-11-27 17:29:35.26975808 +0000 UTC m=+1197.612584418" observedRunningTime="2025-11-27 17:29:41.779553674 +0000 UTC m=+1204.122379992" watchObservedRunningTime="2025-11-27 17:29:41.787242055 +0000 UTC m=+1204.130068373" Nov 27 17:29:41 crc kubenswrapper[4792]: I1127 17:29:41.852332 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n2t2x" podStartSLOduration=17.983637172999998 podStartE2EDuration="24.852311676s" podCreationTimestamp="2025-11-27 17:29:17 +0000 UTC" firstStartedPulling="2025-11-27 17:29:33.653795845 +0000 UTC m=+1195.996622163" lastFinishedPulling="2025-11-27 17:29:40.522470348 +0000 UTC m=+1202.865296666" observedRunningTime="2025-11-27 17:29:41.823610891 +0000 UTC m=+1204.166437209" watchObservedRunningTime="2025-11-27 17:29:41.852311676 +0000 UTC m=+1204.195137994" Nov 27 17:29:42 crc kubenswrapper[4792]: I1127 17:29:42.717177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dzgvb" event={"ID":"a82ac7ae-1443-4fbc-a8bb-2383c148b809","Type":"ContainerStarted","Data":"375413f4c52d936bb38f4330548c103013772ea23455a802271bd2056db1c010"} Nov 27 17:29:42 crc kubenswrapper[4792]: I1127 17:29:42.717438 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dzgvb" 
event={"ID":"a82ac7ae-1443-4fbc-a8bb-2383c148b809","Type":"ContainerStarted","Data":"8b48cfabb91ba9d7014d5f35072b24bf9fb52605aaa3a3aaea866e7caaf02094"} Nov 27 17:29:42 crc kubenswrapper[4792]: I1127 17:29:42.718320 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:42 crc kubenswrapper[4792]: I1127 17:29:42.718349 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:29:42 crc kubenswrapper[4792]: I1127 17:29:42.752317 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-dzgvb" podStartSLOduration=20.20501468 podStartE2EDuration="25.752293766s" podCreationTimestamp="2025-11-27 17:29:17 +0000 UTC" firstStartedPulling="2025-11-27 17:29:34.954999259 +0000 UTC m=+1197.297825587" lastFinishedPulling="2025-11-27 17:29:40.502278345 +0000 UTC m=+1202.845104673" observedRunningTime="2025-11-27 17:29:42.744108732 +0000 UTC m=+1205.086935050" watchObservedRunningTime="2025-11-27 17:29:42.752293766 +0000 UTC m=+1205.095120084" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.073426 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-7qzwv"] Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.110042 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2dtqz"] Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.112045 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.138887 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2dtqz"] Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.254482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drp8k\" (UniqueName: \"kubernetes.io/projected/673c6f91-6293-4630-a292-cf2a94e2e483-kube-api-access-drp8k\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.254815 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-config\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.254863 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.358059 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drp8k\" (UniqueName: \"kubernetes.io/projected/673c6f91-6293-4630-a292-cf2a94e2e483-kube-api-access-drp8k\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.358244 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-config\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.358273 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.361072 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.362050 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-config\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.397135 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drp8k\" (UniqueName: \"kubernetes.io/projected/673c6f91-6293-4630-a292-cf2a94e2e483-kube-api-access-drp8k\") pod \"dnsmasq-dns-7cb5889db5-2dtqz\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") " pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.439379 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" Nov 27 17:29:43 crc kubenswrapper[4792]: I1127 17:29:43.727782 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerStarted","Data":"e6fbace0c614bc0bdf93c06b148404d8b6e1595e04e37259615b4e291b32bc89"} Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.211763 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.239964 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.240187 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.243181 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.243190 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ts2jm" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.243786 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.247710 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.377823 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b7d63be6-0f2b-4b86-abec-4576d23792a9-cache\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.377879 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b7d63be6-0f2b-4b86-abec-4576d23792a9-lock\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.377954 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x4mz\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-kube-api-access-2x4mz\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.377993 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.378035 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.479434 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b7d63be6-0f2b-4b86-abec-4576d23792a9-lock\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.479525 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x4mz\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-kube-api-access-2x4mz\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.479548 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod 
\"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.479579 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.479698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b7d63be6-0f2b-4b86-abec-4576d23792a9-cache\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: E1127 17:29:44.479783 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 27 17:29:44 crc kubenswrapper[4792]: E1127 17:29:44.479815 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 27 17:29:44 crc kubenswrapper[4792]: E1127 17:29:44.479867 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift podName:b7d63be6-0f2b-4b86-abec-4576d23792a9 nodeName:}" failed. No retries permitted until 2025-11-27 17:29:44.97984894 +0000 UTC m=+1207.322675258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift") pod "swift-storage-0" (UID: "b7d63be6-0f2b-4b86-abec-4576d23792a9") : configmap "swift-ring-files" not found Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.480020 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b7d63be6-0f2b-4b86-abec-4576d23792a9-lock\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.480095 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b7d63be6-0f2b-4b86-abec-4576d23792a9-cache\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.480507 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.502586 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x4mz\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-kube-api-access-2x4mz\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.508412 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 
27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.867472 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-z7zwq"] Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.876081 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.880948 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.915347 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-z7zwq"] Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.993482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/88e47e4b-d7fb-4dfc-8352-9705403282a6-ovs-rundir\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.993538 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6zs\" (UniqueName: \"kubernetes.io/projected/88e47e4b-d7fb-4dfc-8352-9705403282a6-kube-api-access-nw6zs\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.993575 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88e47e4b-d7fb-4dfc-8352-9705403282a6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.993598 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/88e47e4b-d7fb-4dfc-8352-9705403282a6-ovn-rundir\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.993632 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.993705 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e47e4b-d7fb-4dfc-8352-9705403282a6-combined-ca-bundle\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:44 crc kubenswrapper[4792]: I1127 17:29:44.993743 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e47e4b-d7fb-4dfc-8352-9705403282a6-config\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:44 crc kubenswrapper[4792]: E1127 17:29:44.994062 4792 
projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 27 17:29:44 crc kubenswrapper[4792]: E1127 17:29:44.994079 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 27 17:29:44 crc kubenswrapper[4792]: E1127 17:29:44.994123 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift podName:b7d63be6-0f2b-4b86-abec-4576d23792a9 nodeName:}" failed. No retries permitted until 2025-11-27 17:29:45.99410835 +0000 UTC m=+1208.336934668 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift") pod "swift-storage-0" (UID: "b7d63be6-0f2b-4b86-abec-4576d23792a9") : configmap "swift-ring-files" not found Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.096083 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e47e4b-d7fb-4dfc-8352-9705403282a6-combined-ca-bundle\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.096151 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e47e4b-d7fb-4dfc-8352-9705403282a6-config\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.096256 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/88e47e4b-d7fb-4dfc-8352-9705403282a6-ovs-rundir\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.096551 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/88e47e4b-d7fb-4dfc-8352-9705403282a6-ovs-rundir\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.096293 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6zs\" (UniqueName: \"kubernetes.io/projected/88e47e4b-d7fb-4dfc-8352-9705403282a6-kube-api-access-nw6zs\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.096880 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e47e4b-d7fb-4dfc-8352-9705403282a6-config\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.097158 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88e47e4b-d7fb-4dfc-8352-9705403282a6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-z7zwq\" (UID: 
\"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.097204 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/88e47e4b-d7fb-4dfc-8352-9705403282a6-ovn-rundir\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.097535 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/88e47e4b-d7fb-4dfc-8352-9705403282a6-ovn-rundir\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.108979 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e47e4b-d7fb-4dfc-8352-9705403282a6-combined-ca-bundle\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.113116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6zs\" (UniqueName: \"kubernetes.io/projected/88e47e4b-d7fb-4dfc-8352-9705403282a6-kube-api-access-nw6zs\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.118938 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88e47e4b-d7fb-4dfc-8352-9705403282a6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-z7zwq\" (UID: \"88e47e4b-d7fb-4dfc-8352-9705403282a6\") " pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.142834 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znctf"] Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.201739 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5pwzl"] Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.203802 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.206770 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.208998 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5pwzl"] Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.222620 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-z7zwq" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.306435 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.306486 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28dt\" (UniqueName: \"kubernetes.io/projected/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-kube-api-access-t28dt\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.306570 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.306833 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-config\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.414104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.414266 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-config\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.414360 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.414380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28dt\" (UniqueName: \"kubernetes.io/projected/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-kube-api-access-t28dt\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.416000 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: 
\"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.416660 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-config\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.417103 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-dns-svc\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.419776 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2dtqz"] Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.436047 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28dt\" (UniqueName: \"kubernetes.io/projected/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-kube-api-access-t28dt\") pod \"dnsmasq-dns-57d65f699f-5pwzl\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.451331 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b64n6"] Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.452827 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.458628 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.463767 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b64n6"] Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.523413 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.619849 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvgz4\" (UniqueName: \"kubernetes.io/projected/87ce06aa-1c03-4047-b4c8-36a610c07218-kube-api-access-xvgz4\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.619943 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.619987 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.620516 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-config\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.620613 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.722599 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-config\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.722693 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.722876 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvgz4\" (UniqueName: \"kubernetes.io/projected/87ce06aa-1c03-4047-b4c8-36a610c07218-kube-api-access-xvgz4\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.722918 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: 
\"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.722953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.724138 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.725454 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.725748 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.727390 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-config\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.748721 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvgz4\" (UniqueName: \"kubernetes.io/projected/87ce06aa-1c03-4047-b4c8-36a610c07218-kube-api-access-xvgz4\") pod \"dnsmasq-dns-b8fbc5445-b64n6\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") " pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.756470 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" event={"ID":"60ae1903-fc35-46be-8965-44bfa16135ba","Type":"ContainerDied","Data":"d4333a9308db5beb263d15b051920f9c7cf00e92823fb8fc69457d544a3df777"} Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.756509 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4333a9308db5beb263d15b051920f9c7cf00e92823fb8fc69457d544a3df777" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.806196 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.819245 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.926614 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-dns-svc\") pod \"60ae1903-fc35-46be-8965-44bfa16135ba\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.926807 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc2jh\" (UniqueName: \"kubernetes.io/projected/60ae1903-fc35-46be-8965-44bfa16135ba-kube-api-access-hc2jh\") pod \"60ae1903-fc35-46be-8965-44bfa16135ba\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.926830 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-config\") pod \"60ae1903-fc35-46be-8965-44bfa16135ba\" (UID: \"60ae1903-fc35-46be-8965-44bfa16135ba\") " Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.927896 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-config" (OuterVolumeSpecName: "config") pod "60ae1903-fc35-46be-8965-44bfa16135ba" (UID: "60ae1903-fc35-46be-8965-44bfa16135ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.928167 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60ae1903-fc35-46be-8965-44bfa16135ba" (UID: "60ae1903-fc35-46be-8965-44bfa16135ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:45 crc kubenswrapper[4792]: I1127 17:29:45.930863 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ae1903-fc35-46be-8965-44bfa16135ba-kube-api-access-hc2jh" (OuterVolumeSpecName: "kube-api-access-hc2jh") pod "60ae1903-fc35-46be-8965-44bfa16135ba" (UID: "60ae1903-fc35-46be-8965-44bfa16135ba"). InnerVolumeSpecName "kube-api-access-hc2jh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:29:46 crc kubenswrapper[4792]: I1127 17:29:46.028872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:46 crc kubenswrapper[4792]: I1127 17:29:46.029080 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:46 crc kubenswrapper[4792]: I1127 17:29:46.029095 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60ae1903-fc35-46be-8965-44bfa16135ba-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:46 crc kubenswrapper[4792]: I1127 17:29:46.029113 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc2jh\" (UniqueName: \"kubernetes.io/projected/60ae1903-fc35-46be-8965-44bfa16135ba-kube-api-access-hc2jh\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:46 crc kubenswrapper[4792]: E1127 17:29:46.029088 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 27 17:29:46 crc kubenswrapper[4792]: E1127 17:29:46.029141 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 27 17:29:46 crc kubenswrapper[4792]: E1127 17:29:46.029189 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift podName:b7d63be6-0f2b-4b86-abec-4576d23792a9 nodeName:}" failed. No retries permitted until 2025-11-27 17:29:48.029172315 +0000 UTC m=+1210.371998633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift") pod "swift-storage-0" (UID: "b7d63be6-0f2b-4b86-abec-4576d23792a9") : configmap "swift-ring-files" not found Nov 27 17:29:46 crc kubenswrapper[4792]: I1127 17:29:46.763714 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-7qzwv" Nov 27 17:29:46 crc kubenswrapper[4792]: I1127 17:29:46.824370 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-7qzwv"] Nov 27 17:29:46 crc kubenswrapper[4792]: I1127 17:29:46.832189 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-7qzwv"] Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.068505 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:48 crc kubenswrapper[4792]: E1127 17:29:48.068890 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 27 17:29:48 crc kubenswrapper[4792]: E1127 17:29:48.068938 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 27 17:29:48 crc kubenswrapper[4792]: E1127 17:29:48.069032 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift podName:b7d63be6-0f2b-4b86-abec-4576d23792a9 nodeName:}" failed. No retries permitted until 2025-11-27 17:29:52.069004788 +0000 UTC m=+1214.411831146 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift") pod "swift-storage-0" (UID: "b7d63be6-0f2b-4b86-abec-4576d23792a9") : configmap "swift-ring-files" not found Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.174148 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-s5klr"] Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.177414 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.186188 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.186348 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.190078 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-s5klr"] Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.191371 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.220887 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2n56v"] Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.222938 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.246483 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2n56v"] Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.266778 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-s5klr"] Nov 27 17:29:48 crc kubenswrapper[4792]: E1127 17:29:48.267526 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-492f9 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-s5klr" podUID="0a4e00fd-8a47-46da-aa39-8e26856f4816" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.274584 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-swiftconf\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.274861 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-scripts\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.275122 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-492f9\" (UniqueName: \"kubernetes.io/projected/0a4e00fd-8a47-46da-aa39-8e26856f4816-kube-api-access-492f9\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.275157 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-combined-ca-bundle\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.275177 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a4e00fd-8a47-46da-aa39-8e26856f4816-etc-swift\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.275415 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-ring-data-devices\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.275439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-dispersionconf\") pod \"swift-ring-rebalance-s5klr\" (UID: 
\"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.376832 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66a7953b-06d4-453f-801c-4873d0d43c7a-etc-swift\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.376923 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-ring-data-devices\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377010 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-ring-data-devices\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377078 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-dispersionconf\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377265 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6pr5\" (UniqueName: \"kubernetes.io/projected/66a7953b-06d4-453f-801c-4873d0d43c7a-kube-api-access-v6pr5\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-scripts\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377325 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-swiftconf\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377433 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-scripts\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377577 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-dispersionconf\") pod \"swift-ring-rebalance-2n56v\" (UID: 
\"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377681 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-ring-data-devices\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377757 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-492f9\" (UniqueName: \"kubernetes.io/projected/0a4e00fd-8a47-46da-aa39-8e26856f4816-kube-api-access-492f9\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377841 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-swiftconf\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377905 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-combined-ca-bundle\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377939 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a4e00fd-8a47-46da-aa39-8e26856f4816-etc-swift\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.377997 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-combined-ca-bundle\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.378083 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-scripts\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.378317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a4e00fd-8a47-46da-aa39-8e26856f4816-etc-swift\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.384019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-dispersionconf\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 
crc kubenswrapper[4792]: I1127 17:29:48.384049 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-swiftconf\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.384328 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-combined-ca-bundle\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.394839 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-492f9\" (UniqueName: \"kubernetes.io/projected/0a4e00fd-8a47-46da-aa39-8e26856f4816-kube-api-access-492f9\") pod \"swift-ring-rebalance-s5klr\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.481273 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-dispersionconf\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.481402 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-swiftconf\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.481464 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-combined-ca-bundle\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.481591 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66a7953b-06d4-453f-801c-4873d0d43c7a-etc-swift\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.481631 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-ring-data-devices\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.482295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6pr5\" (UniqueName: \"kubernetes.io/projected/66a7953b-06d4-453f-801c-4873d0d43c7a-kube-api-access-v6pr5\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.483022 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-scripts\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.482789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-ring-data-devices\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.482214 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66a7953b-06d4-453f-801c-4873d0d43c7a-etc-swift\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.483656 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-scripts\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.484689 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-swiftconf\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.486094 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-dispersionconf\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.490376 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-combined-ca-bundle\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.497525 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6pr5\" (UniqueName: \"kubernetes.io/projected/66a7953b-06d4-453f-801c-4873d0d43c7a-kube-api-access-v6pr5\") pod \"swift-ring-rebalance-2n56v\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.565240 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.701523 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ae1903-fc35-46be-8965-44bfa16135ba" path="/var/lib/kubelet/pods/60ae1903-fc35-46be-8965-44bfa16135ba/volumes" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.785212 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.801398 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.891309 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-swiftconf\") pod \"0a4e00fd-8a47-46da-aa39-8e26856f4816\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.891456 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-492f9\" (UniqueName: \"kubernetes.io/projected/0a4e00fd-8a47-46da-aa39-8e26856f4816-kube-api-access-492f9\") pod \"0a4e00fd-8a47-46da-aa39-8e26856f4816\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.891512 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a4e00fd-8a47-46da-aa39-8e26856f4816-etc-swift\") pod \"0a4e00fd-8a47-46da-aa39-8e26856f4816\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.891581 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-dispersionconf\") pod \"0a4e00fd-8a47-46da-aa39-8e26856f4816\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.892244 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4e00fd-8a47-46da-aa39-8e26856f4816-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0a4e00fd-8a47-46da-aa39-8e26856f4816" (UID: "0a4e00fd-8a47-46da-aa39-8e26856f4816"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.892449 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-ring-data-devices\") pod \"0a4e00fd-8a47-46da-aa39-8e26856f4816\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.892687 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-scripts\") pod \"0a4e00fd-8a47-46da-aa39-8e26856f4816\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.892952 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-combined-ca-bundle\") pod \"0a4e00fd-8a47-46da-aa39-8e26856f4816\" (UID: \"0a4e00fd-8a47-46da-aa39-8e26856f4816\") " Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.893036 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0a4e00fd-8a47-46da-aa39-8e26856f4816" (UID: "0a4e00fd-8a47-46da-aa39-8e26856f4816"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.893389 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-scripts" (OuterVolumeSpecName: "scripts") pod "0a4e00fd-8a47-46da-aa39-8e26856f4816" (UID: "0a4e00fd-8a47-46da-aa39-8e26856f4816"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.894841 4792 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.894892 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a4e00fd-8a47-46da-aa39-8e26856f4816-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.894917 4792 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0a4e00fd-8a47-46da-aa39-8e26856f4816-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.895223 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0a4e00fd-8a47-46da-aa39-8e26856f4816" (UID: "0a4e00fd-8a47-46da-aa39-8e26856f4816"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.895856 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4e00fd-8a47-46da-aa39-8e26856f4816-kube-api-access-492f9" (OuterVolumeSpecName: "kube-api-access-492f9") pod "0a4e00fd-8a47-46da-aa39-8e26856f4816" (UID: "0a4e00fd-8a47-46da-aa39-8e26856f4816"). InnerVolumeSpecName "kube-api-access-492f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.898769 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a4e00fd-8a47-46da-aa39-8e26856f4816" (UID: "0a4e00fd-8a47-46da-aa39-8e26856f4816"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.899879 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0a4e00fd-8a47-46da-aa39-8e26856f4816" (UID: "0a4e00fd-8a47-46da-aa39-8e26856f4816"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.996733 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.997026 4792 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.997040 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-492f9\" (UniqueName: \"kubernetes.io/projected/0a4e00fd-8a47-46da-aa39-8e26856f4816-kube-api-access-492f9\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:48 crc kubenswrapper[4792]: I1127 17:29:48.997056 4792 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0a4e00fd-8a47-46da-aa39-8e26856f4816-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:49 crc kubenswrapper[4792]: I1127 17:29:49.793621 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-s5klr" Nov 27 17:29:49 crc kubenswrapper[4792]: I1127 17:29:49.859363 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-s5klr"] Nov 27 17:29:49 crc kubenswrapper[4792]: I1127 17:29:49.870468 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-s5klr"] Nov 27 17:29:50 crc kubenswrapper[4792]: I1127 17:29:50.705338 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4e00fd-8a47-46da-aa39-8e26856f4816" path="/var/lib/kubelet/pods/0a4e00fd-8a47-46da-aa39-8e26856f4816/volumes" Nov 27 17:29:50 crc kubenswrapper[4792]: I1127 17:29:50.827329 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:50 crc kubenswrapper[4792]: I1127 17:29:50.828068 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:52 crc kubenswrapper[4792]: I1127 17:29:52.163088 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:29:52 crc kubenswrapper[4792]: E1127 17:29:52.163300 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 27 17:29:52 crc kubenswrapper[4792]: E1127 17:29:52.163442 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 27 17:29:52 crc kubenswrapper[4792]: E1127 17:29:52.163501 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift podName:b7d63be6-0f2b-4b86-abec-4576d23792a9 nodeName:}" failed. No retries permitted until 2025-11-27 17:30:00.163484314 +0000 UTC m=+1222.506310632 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift") pod "swift-storage-0" (UID: "b7d63be6-0f2b-4b86-abec-4576d23792a9") : configmap "swift-ring-files" not found
Nov 27 17:29:52 crc kubenswrapper[4792]: I1127 17:29:52.823736 4792 generic.go:334] "Generic (PLEG): container finished" podID="2af964c3-1de4-48af-a89c-df58527be8cb" containerID="e6fbace0c614bc0bdf93c06b148404d8b6e1595e04e37259615b4e291b32bc89" exitCode=0
Nov 27 17:29:52 crc kubenswrapper[4792]: I1127 17:29:52.823791 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerDied","Data":"e6fbace0c614bc0bdf93c06b148404d8b6e1595e04e37259615b4e291b32bc89"}
Nov 27 17:29:52 crc kubenswrapper[4792]: I1127 17:29:52.825892 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 27 17:29:53 crc kubenswrapper[4792]: I1127 17:29:53.093994 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 27 17:29:55 crc kubenswrapper[4792]: E1127 17:29:55.442680 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Nov 27 17:29:55 crc kubenswrapper[4792]: E1127 17:29:55.443441 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n644h54bh79hcbhf9h645h8chcfh58fh657h5f8h5fh549h597h8fhb4h685h79h674h59fhfdh677h65hf8h58fh9bh67dh64h5bdh68h654hfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvh5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 27 17:29:55 crc kubenswrapper[4792]: E1127 17:29:55.444679 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5"
Nov 27 17:29:55 crc kubenswrapper[4792]: E1127 17:29:55.784998 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified"
Nov 27 17:29:55 crc kubenswrapper[4792]: E1127 17:29:55.785428 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n578h55fh689h665h65ch5bbh97h56ch5c4h664h57h5fhbbh58fhb6h7fh5bdh54fh8dhdch599h686h554h595h78h5bh78h586hf5h597h59ch699q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zzxnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(3b7e5347-cd23-498f-ac14-95ce8f106b97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
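
The two "Unhandled Error" entries above are the kubelet dumping the full Container spec of openstack-network-exporter after its image pull was cancelled mid-copy ("copying config: context canceled"); in the entries that follow, the ErrImagePull degrades into ImagePullBackOff for both ovsdbserver pods. A minimal sketch of the retry pacing behind those "Back-off pulling image" messages, assuming the kubelet's default image-pull back-off of a 10s base doubling to a 5m cap (the defaults are an assumption here, not something visible in this log):

// Sketch of exponential image-pull back-off: each consecutive pull failure
// doubles the wait, capped at maxDelay. 10s/5m are assumed kubelet defaults.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second // assumed base back-off
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed: back off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
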
logger="UnhandledError" Nov 27 17:29:55 crc kubenswrapper[4792]: E1127 17:29:55.786810 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="3b7e5347-cd23-498f-ac14-95ce8f106b97" Nov 27 17:29:55 crc kubenswrapper[4792]: I1127 17:29:55.877101 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:55 crc kubenswrapper[4792]: E1127 17:29:55.904966 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="3b7e5347-cd23-498f-ac14-95ce8f106b97" Nov 27 17:29:55 crc kubenswrapper[4792]: E1127 17:29:55.915769 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5" Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.026552 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.236756 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.431831 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.436349 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5pwzl"] Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.446663 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2dtqz"] Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.486854 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.659194 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-z7zwq"] Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.755207 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b64n6"] Nov 27 17:29:56 crc kubenswrapper[4792]: W1127 17:29:56.757534 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87ce06aa_1c03_4047_b4c8_36a610c07218.slice/crio-d719f1eb2d101c499de5cc47298aa67f81e4d09b2cdde9612607898a2eaae2aa WatchSource:0}: Error finding container d719f1eb2d101c499de5cc47298aa67f81e4d09b2cdde9612607898a2eaae2aa: Status 404 returned error can't find the container with id d719f1eb2d101c499de5cc47298aa67f81e4d09b2cdde9612607898a2eaae2aa Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.774424 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2n56v"] Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.872201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" event={"ID":"673c6f91-6293-4630-a292-cf2a94e2e483","Type":"ContainerStarted","Data":"55bbc260bf19f93d73f9d15817397ce483bd80ddab1dded31f960d299e78dedc"} Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.875680 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8ed6358b-2030-436d-a847-724a53f802ea","Type":"ContainerStarted","Data":"4ea2a2639d35b78c8adeba90a76a4caf4b146739d876d1651ebd0cbded933d3a"} Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.881834 4792 generic.go:334] "Generic (PLEG): container finished" podID="8be51e3d-884a-49bc-a51e-98d57ad245df" containerID="3bf4aed7a3aac1347ffd815214f924cf16d2604d39ff8e204895a434e117f8ba" exitCode=0 Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.881908 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" event={"ID":"8be51e3d-884a-49bc-a51e-98d57ad245df","Type":"ContainerDied","Data":"3bf4aed7a3aac1347ffd815214f924cf16d2604d39ff8e204895a434e117f8ba"} Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.892409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" event={"ID":"87ce06aa-1c03-4047-b4c8-36a610c07218","Type":"ContainerStarted","Data":"d719f1eb2d101c499de5cc47298aa67f81e4d09b2cdde9612607898a2eaae2aa"} Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.898185 4792 generic.go:334] "Generic (PLEG): container finished" podID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" containerID="dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334" exitCode=0 Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.898259 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" event={"ID":"4bf7c13a-da9d-47c9-af54-bd0e91aa659f","Type":"ContainerDied","Data":"dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334"} Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.898283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" event={"ID":"4bf7c13a-da9d-47c9-af54-bd0e91aa659f","Type":"ContainerStarted","Data":"a22b76abf4159495f5040d645ce7207edf37682ea72183bef0e9a7e26bce75d2"} Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.903638 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-z7zwq" event={"ID":"88e47e4b-d7fb-4dfc-8352-9705403282a6","Type":"ContainerStarted","Data":"314bf99b8615604b92e28f4ef730987b7055e03991086e3adf7af04aae54739d"} Nov 27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.904481 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:56 crc kubenswrapper[4792]: E1127 17:29:56.933419 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5" Nov 27 17:29:56 crc kubenswrapper[4792]: E1127 17:29:56.934183 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="3b7e5347-cd23-498f-ac14-95ce8f106b97" Nov 
27 17:29:56 crc kubenswrapper[4792]: I1127 17:29:56.972863 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.311869 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.485240 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-dns-svc\") pod \"8be51e3d-884a-49bc-a51e-98d57ad245df\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.485514 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48q7f\" (UniqueName: \"kubernetes.io/projected/8be51e3d-884a-49bc-a51e-98d57ad245df-kube-api-access-48q7f\") pod \"8be51e3d-884a-49bc-a51e-98d57ad245df\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.485537 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-config\") pod \"8be51e3d-884a-49bc-a51e-98d57ad245df\" (UID: \"8be51e3d-884a-49bc-a51e-98d57ad245df\") " Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.491574 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be51e3d-884a-49bc-a51e-98d57ad245df-kube-api-access-48q7f" (OuterVolumeSpecName: "kube-api-access-48q7f") pod "8be51e3d-884a-49bc-a51e-98d57ad245df" (UID: "8be51e3d-884a-49bc-a51e-98d57ad245df"). InnerVolumeSpecName "kube-api-access-48q7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.506300 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-config" (OuterVolumeSpecName: "config") pod "8be51e3d-884a-49bc-a51e-98d57ad245df" (UID: "8be51e3d-884a-49bc-a51e-98d57ad245df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.506789 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8be51e3d-884a-49bc-a51e-98d57ad245df" (UID: "8be51e3d-884a-49bc-a51e-98d57ad245df"). InnerVolumeSpecName "dns-svc". 
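
Each "Generic (PLEG): container finished" line carries the exit code, and the matching "ContainerDied" event for the same containerID follows within microseconds. When triaging a capture like this one, pairing the two by containerID recovers exit codes per pod; an ad-hoc sketch under that reading (the regexes target this journal format with abridged IDs, nothing the kubelet provides):

// Pair PLEG "container finished" lines with their ContainerDied events by
// containerID, recovering exit codes per pod. Sample IDs abridged.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	finished = regexp.MustCompile(`container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(\d+)`)
	died     = regexp.MustCompile(`pod="([^"]+)" event=.*"Type":"ContainerDied","Data":"([0-9a-f]+)"`)
)

func main() {
	capture := `I1127 17:29:56.881834 4792 generic.go:334] "Generic (PLEG): container finished" podID="8be51e3d" containerID="3bf4aed7" exitCode=0
I1127 17:29:56.881908 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" event={"ID":"8be51e3d","Type":"ContainerDied","Data":"3bf4aed7"}`
	exitCodes := map[string]string{} // containerID -> exit code
	for _, line := range strings.Split(capture, "\n") {
		if m := finished.FindStringSubmatch(line); m != nil {
			exitCodes[m[2]] = m[3]
		} else if m := died.FindStringSubmatch(line); m != nil {
			fmt.Printf("pod %s: container %s died, exitCode=%s\n", m[1], m[2], exitCodes[m[2]])
		}
	}
}
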
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.587965 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.588007 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48q7f\" (UniqueName: \"kubernetes.io/projected/8be51e3d-884a-49bc-a51e-98d57ad245df-kube-api-access-48q7f\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.588023 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be51e3d-884a-49bc-a51e-98d57ad245df-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.935452 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2n56v" event={"ID":"66a7953b-06d4-453f-801c-4873d0d43c7a","Type":"ContainerStarted","Data":"a3f1cb4cb770ceea51ef39cba2dab115690ec4b51e5af6f34df2b50c92bb3a61"} Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.938268 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" event={"ID":"8be51e3d-884a-49bc-a51e-98d57ad245df","Type":"ContainerDied","Data":"15b1dd4b3f3c01bfabec35378cf54ef03904fc106110a830cb25209b582b724c"} Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.938328 4792 scope.go:117] "RemoveContainer" containerID="3bf4aed7a3aac1347ffd815214f924cf16d2604d39ff8e204895a434e117f8ba" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.938457 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-znctf" Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.954299 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27d6022e-eea3-41e9-b880-620328dc5d78","Type":"ContainerStarted","Data":"8fffed7f25cc826dced9350fe59c2b2b5794322a9e9024a84355b647529d07bd"} Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.959581 4792 generic.go:334] "Generic (PLEG): container finished" podID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerID="89c516bdd0966bd3ddedaa93adfaa68c3bfef62cea2038525aad1adf00e90a9b" exitCode=0 Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.959630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" event={"ID":"87ce06aa-1c03-4047-b4c8-36a610c07218","Type":"ContainerDied","Data":"89c516bdd0966bd3ddedaa93adfaa68c3bfef62cea2038525aad1adf00e90a9b"} Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.982450 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbbf8d9a-2069-4544-92db-ad5174339775","Type":"ContainerStarted","Data":"7b0b51a3568b0257fd59b44b84fcb2226603c94271cc99981af94095c140c28e"} Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.985897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" event={"ID":"4bf7c13a-da9d-47c9-af54-bd0e91aa659f","Type":"ContainerStarted","Data":"062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777"} Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.986086 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:29:57 crc kubenswrapper[4792]: 
I1127 17:29:57.993681 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-z7zwq" event={"ID":"88e47e4b-d7fb-4dfc-8352-9705403282a6","Type":"ContainerStarted","Data":"0764f25fe8b9d858539fc48d0f4f624d352e3700e75788479f1dd45c9bfb828d"} Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.998285 4792 generic.go:334] "Generic (PLEG): container finished" podID="673c6f91-6293-4630-a292-cf2a94e2e483" containerID="1b17e2aef7a4c8a18e35e9f7c4bbf5576bcb2bbe8690175a32fa5c9ea0d9dd95" exitCode=0 Nov 27 17:29:57 crc kubenswrapper[4792]: I1127 17:29:57.998752 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" event={"ID":"673c6f91-6293-4630-a292-cf2a94e2e483","Type":"ContainerDied","Data":"1b17e2aef7a4c8a18e35e9f7c4bbf5576bcb2bbe8690175a32fa5c9ea0d9dd95"} Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.035066 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" podStartSLOduration=13.035046309 podStartE2EDuration="13.035046309s" podCreationTimestamp="2025-11-27 17:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:29:58.023214005 +0000 UTC m=+1220.366040333" watchObservedRunningTime="2025-11-27 17:29:58.035046309 +0000 UTC m=+1220.377872627" Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.132821 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znctf"] Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.146958 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znctf"] Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.160384 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-z7zwq" podStartSLOduration=13.535226229 podStartE2EDuration="14.160357121s" podCreationTimestamp="2025-11-27 17:29:44 +0000 UTC" firstStartedPulling="2025-11-27 17:29:56.655583036 +0000 UTC m=+1218.998409354" lastFinishedPulling="2025-11-27 17:29:57.280713928 +0000 UTC m=+1219.623540246" observedRunningTime="2025-11-27 17:29:58.1061301 +0000 UTC m=+1220.448956418" watchObservedRunningTime="2025-11-27 17:29:58.160357121 +0000 UTC m=+1220.503183449" Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.235820 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.300339 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.506921 4792 util.go:48] "No ready sandbox for pod can be found. 
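
The pod_startup_latency_tracker entries follow a consistent relation across this log: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling); when both pull timestamps are the zero time, SLO equals E2E, as in the dnsmasq entry above. The ovn-controller-metrics-z7zwq numbers check out exactly: 14.160357121s minus the 0.625130892s pull window gives the logged 13.535226229s. A sketch verifying it (the formula is our reading of the logged fields, not quoted kubelet source):

// Verify podStartSLOduration = E2E - (lastFinishedPulling - firstStartedPulling)
// against the ovn-controller-metrics-z7zwq entry above.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-27 17:29:44 +0000 UTC")
	firstPull := mustParse("2025-11-27 17:29:56.655583036 +0000 UTC")
	lastPull := mustParse("2025-11-27 17:29:57.280713928 +0000 UTC")
	running := mustParse("2025-11-27 17:29:58.160357121 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("E2E:", e2e) // 14.160357121s, as logged
	fmt.Println("SLO:", slo) // 13.535226229s, as logged
}
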
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.605379 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-dns-svc\") pod \"673c6f91-6293-4630-a292-cf2a94e2e483\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") "
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.605896 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drp8k\" (UniqueName: \"kubernetes.io/projected/673c6f91-6293-4630-a292-cf2a94e2e483-kube-api-access-drp8k\") pod \"673c6f91-6293-4630-a292-cf2a94e2e483\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") "
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.605934 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-config\") pod \"673c6f91-6293-4630-a292-cf2a94e2e483\" (UID: \"673c6f91-6293-4630-a292-cf2a94e2e483\") "
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.622854 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/673c6f91-6293-4630-a292-cf2a94e2e483-kube-api-access-drp8k" (OuterVolumeSpecName: "kube-api-access-drp8k") pod "673c6f91-6293-4630-a292-cf2a94e2e483" (UID: "673c6f91-6293-4630-a292-cf2a94e2e483"). InnerVolumeSpecName "kube-api-access-drp8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.639491 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-config" (OuterVolumeSpecName: "config") pod "673c6f91-6293-4630-a292-cf2a94e2e483" (UID: "673c6f91-6293-4630-a292-cf2a94e2e483"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.643539 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "673c6f91-6293-4630-a292-cf2a94e2e483" (UID: "673c6f91-6293-4630-a292-cf2a94e2e483"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.698578 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be51e3d-884a-49bc-a51e-98d57ad245df" path="/var/lib/kubelet/pods/8be51e3d-884a-49bc-a51e-98d57ad245df/volumes"
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.707692 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drp8k\" (UniqueName: \"kubernetes.io/projected/673c6f91-6293-4630-a292-cf2a94e2e483-kube-api-access-drp8k\") on node \"crc\" DevicePath \"\""
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.707891 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:29:58 crc kubenswrapper[4792]: I1127 17:29:58.707946 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c6f91-6293-4630-a292-cf2a94e2e483-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.009887 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5","Type":"ContainerStarted","Data":"b7db640916d6319bc513f852fe02a10ae4fcbb3333ecb2dfabb4398bcd79f097"}
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.012360 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz" event={"ID":"673c6f91-6293-4630-a292-cf2a94e2e483","Type":"ContainerDied","Data":"55bbc260bf19f93d73f9d15817397ce483bd80ddab1dded31f960d299e78dedc"}
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.012386 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-2dtqz"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.012423 4792 scope.go:117] "RemoveContainer" containerID="1b17e2aef7a4c8a18e35e9f7c4bbf5576bcb2bbe8690175a32fa5c9ea0d9dd95"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.019147 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b7e5347-cd23-498f-ac14-95ce8f106b97","Type":"ContainerStarted","Data":"27a4d5df73d2777fce1501488dbc0ed5ccc0ed3525b66d220b4001c2b6d19b17"}
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.038230 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" event={"ID":"87ce06aa-1c03-4047-b4c8-36a610c07218","Type":"ContainerStarted","Data":"cda4f09799cb5135e660556608faaa5e9549a1c1cf2c3901c1ad74f191b38a9e"}
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.038278 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.043955 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=33.47883002 podStartE2EDuration="40.04391894s" podCreationTimestamp="2025-11-27 17:29:19 +0000 UTC" firstStartedPulling="2025-11-27 17:29:33.958062335 +0000 UTC m=+1196.300888653" lastFinishedPulling="2025-11-27 17:29:40.523151255 +0000 UTC m=+1202.865977573" observedRunningTime="2025-11-27 17:29:59.030556597 +0000 UTC m=+1221.373382945" watchObservedRunningTime="2025-11-27 17:29:59.04391894 +0000 UTC m=+1221.386745298"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.072162 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.096792 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2dtqz"]
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.107233 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-2dtqz"]
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.125707 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=39.545810509 podStartE2EDuration="45.125688557s" podCreationTimestamp="2025-11-27 17:29:14 +0000 UTC" firstStartedPulling="2025-11-27 17:29:34.944325024 +0000 UTC m=+1197.287151342" lastFinishedPulling="2025-11-27 17:29:40.524203072 +0000 UTC m=+1202.867029390" observedRunningTime="2025-11-27 17:29:59.08845914 +0000 UTC m=+1221.431285458" watchObservedRunningTime="2025-11-27 17:29:59.125688557 +0000 UTC m=+1221.468514865"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.160972 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" podStartSLOduration=14.160950106 podStartE2EDuration="14.160950106s" podCreationTimestamp="2025-11-27 17:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:29:59.145569732 +0000 UTC m=+1221.488396050" watchObservedRunningTime="2025-11-27 17:29:59.160950106 +0000 UTC m=+1221.503776424"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.261974 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Nov 27 17:29:59 crc kubenswrapper[4792]: E1127 17:29:59.262465 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be51e3d-884a-49bc-a51e-98d57ad245df" containerName="init"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.262490 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be51e3d-884a-49bc-a51e-98d57ad245df" containerName="init"
Nov 27 17:29:59 crc kubenswrapper[4792]: E1127 17:29:59.262547 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673c6f91-6293-4630-a292-cf2a94e2e483" containerName="init"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.262556 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="673c6f91-6293-4630-a292-cf2a94e2e483" containerName="init"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.262811 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="673c6f91-6293-4630-a292-cf2a94e2e483" containerName="init"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.262843 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be51e3d-884a-49bc-a51e-98d57ad245df" containerName="init"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.264276 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.269192 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.269361 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.270059 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8bmsj"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.270265 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.273862 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.341607 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/446f7473-cc3f-42b6-931c-eb1747df2c73-scripts\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.341688 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/446f7473-cc3f-42b6-931c-eb1747df2c73-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.341741 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dpvp\" (UniqueName: \"kubernetes.io/projected/446f7473-cc3f-42b6-931c-eb1747df2c73-kube-api-access-8dpvp\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.341840 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0"
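
For the freshly added ovn-northd-0 pod the mount side now runs the teardown flow in reverse: a VerifyControllerAttachedVolume pass per declared volume, then "MountVolume started", then "MountVolume.SetUp succeeded", and only once every volume is up does sandbox creation proceed ("No sandbox for pod can be found" above, ContainerStarted further down). A sketch of that gate (illustrative, not kubelet API):

// Gate sandbox creation on every declared volume reaching SetUp success.
package main

import "fmt"

func main() {
	declared := []string{
		"scripts", "ovn-rundir", "kube-api-access-8dpvp", "combined-ca-bundle",
		"config", "metrics-certs-tls-certs", "ovn-northd-tls-certs",
	}
	ready := map[string]bool{}
	for _, v := range declared {
		// VerifyControllerAttachedVolume, then MountVolume.SetUp per volume.
		ready[v] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
	}
	for _, v := range declared {
		if !ready[v] {
			fmt.Println("waiting on", v, "- cannot create sandbox yet")
			return
		}
	}
	fmt.Println(`all volumes mounted; creating sandbox for pod "openstack/ovn-northd-0"`)
}
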
pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.341894 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446f7473-cc3f-42b6-931c-eb1747df2c73-config\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.341953 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.342044 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.446295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/446f7473-cc3f-42b6-931c-eb1747df2c73-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.447168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dpvp\" (UniqueName: \"kubernetes.io/projected/446f7473-cc3f-42b6-931c-eb1747df2c73-kube-api-access-8dpvp\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.447362 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.447423 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/446f7473-cc3f-42b6-931c-eb1747df2c73-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.447531 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446f7473-cc3f-42b6-931c-eb1747df2c73-config\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.447829 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.447930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.448007 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/446f7473-cc3f-42b6-931c-eb1747df2c73-scripts\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.448626 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/446f7473-cc3f-42b6-931c-eb1747df2c73-config\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.449378 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/446f7473-cc3f-42b6-931c-eb1747df2c73-scripts\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.452400 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.453925 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.462904 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dpvp\" (UniqueName: \"kubernetes.io/projected/446f7473-cc3f-42b6-931c-eb1747df2c73-kube-api-access-8dpvp\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.472037 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/446f7473-cc3f-42b6-931c-eb1747df2c73-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"446f7473-cc3f-42b6-931c-eb1747df2c73\") " pod="openstack/ovn-northd-0" Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.586531 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.586531 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 27 17:29:59 crc kubenswrapper[4792]: I1127 17:29:59.699673 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-549b454695-9zzgx" podUID="547350b1-93f3-451b-9c39-905a201a4af4" containerName="console" containerID="cri-o://a76c57692388131dbdb6105cf19be36865dca8b8e1c4b79e670ee8e8f064cf6f" gracePeriod=15
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.047091 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-549b454695-9zzgx_547350b1-93f3-451b-9c39-905a201a4af4/console/0.log"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.047461 4792 generic.go:334] "Generic (PLEG): container finished" podID="547350b1-93f3-451b-9c39-905a201a4af4" containerID="a76c57692388131dbdb6105cf19be36865dca8b8e1c4b79e670ee8e8f064cf6f" exitCode=2
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.047599 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-549b454695-9zzgx" event={"ID":"547350b1-93f3-451b-9c39-905a201a4af4","Type":"ContainerDied","Data":"a76c57692388131dbdb6105cf19be36865dca8b8e1c4b79e670ee8e8f064cf6f"}
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.135499 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"]
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.137158 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.139838 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.140003 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.165928 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"]
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.282810 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxppw\" (UniqueName: \"kubernetes.io/projected/3f657f1d-3b19-4447-8d11-3525019b515b-kube-api-access-kxppw\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.282952 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.282985 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f657f1d-3b19-4447-8d11-3525019b515b-secret-volume\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.283046 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f657f1d-3b19-4447-8d11-3525019b515b-config-volume\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
Nov 27 17:30:00 crc kubenswrapper[4792]: E1127 17:30:00.283698 4792 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 27 17:30:00 crc kubenswrapper[4792]: E1127 17:30:00.283721 4792 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 27 17:30:00 crc kubenswrapper[4792]: E1127 17:30:00.283763 4792 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift podName:b7d63be6-0f2b-4b86-abec-4576d23792a9 nodeName:}" failed. No retries permitted until 2025-11-27 17:30:16.283746996 +0000 UTC m=+1238.626573424 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift") pod "swift-storage-0" (UID: "b7d63be6-0f2b-4b86-abec-4576d23792a9") : configmap "swift-ring-files" not found
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.384721 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxppw\" (UniqueName: \"kubernetes.io/projected/3f657f1d-3b19-4447-8d11-3525019b515b-kube-api-access-kxppw\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.384804 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f657f1d-3b19-4447-8d11-3525019b515b-secret-volume\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.384879 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f657f1d-3b19-4447-8d11-3525019b515b-config-volume\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.386014 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f657f1d-3b19-4447-8d11-3525019b515b-config-volume\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.396341 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f657f1d-3b19-4447-8d11-3525019b515b-secret-volume\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"
"MountVolume.SetUp succeeded for volume \"kube-api-access-kxppw\" (UniqueName: \"kubernetes.io/projected/3f657f1d-3b19-4447-8d11-3525019b515b-kube-api-access-kxppw\") pod \"collect-profiles-29404410-dxc56\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56" Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.463081 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56" Nov 27 17:30:00 crc kubenswrapper[4792]: I1127 17:30:00.707011 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="673c6f91-6293-4630-a292-cf2a94e2e483" path="/var/lib/kubelet/pods/673c6f91-6293-4630-a292-cf2a94e2e483/volumes" Nov 27 17:30:01 crc kubenswrapper[4792]: I1127 17:30:01.067170 4792 generic.go:334] "Generic (PLEG): container finished" podID="8ed6358b-2030-436d-a847-724a53f802ea" containerID="4ea2a2639d35b78c8adeba90a76a4caf4b146739d876d1651ebd0cbded933d3a" exitCode=0 Nov 27 17:30:01 crc kubenswrapper[4792]: I1127 17:30:01.068503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8ed6358b-2030-436d-a847-724a53f802ea","Type":"ContainerDied","Data":"4ea2a2639d35b78c8adeba90a76a4caf4b146739d876d1651ebd0cbded933d3a"} Nov 27 17:30:01 crc kubenswrapper[4792]: I1127 17:30:01.934365 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-549b454695-9zzgx_547350b1-93f3-451b-9c39-905a201a4af4/console/0.log" Nov 27 17:30:01 crc kubenswrapper[4792]: I1127 17:30:01.934429 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.077154 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-549b454695-9zzgx_547350b1-93f3-451b-9c39-905a201a4af4/console/0.log" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.077214 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-549b454695-9zzgx" event={"ID":"547350b1-93f3-451b-9c39-905a201a4af4","Type":"ContainerDied","Data":"6a633abfe9dd0549b28f17b10db5fbec7c27b33194b163560a95ca56cd62a430"} Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.077308 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-549b454695-9zzgx" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.116028 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-console-config\") pod \"547350b1-93f3-451b-9c39-905a201a4af4\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.116097 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-serving-cert\") pod \"547350b1-93f3-451b-9c39-905a201a4af4\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.116169 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-oauth-config\") pod \"547350b1-93f3-451b-9c39-905a201a4af4\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.116306 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-trusted-ca-bundle\") pod \"547350b1-93f3-451b-9c39-905a201a4af4\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.116349 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-service-ca\") pod \"547350b1-93f3-451b-9c39-905a201a4af4\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.116491 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgvsb\" (UniqueName: \"kubernetes.io/projected/547350b1-93f3-451b-9c39-905a201a4af4-kube-api-access-wgvsb\") pod \"547350b1-93f3-451b-9c39-905a201a4af4\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.116602 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-oauth-serving-cert\") pod \"547350b1-93f3-451b-9c39-905a201a4af4\" (UID: \"547350b1-93f3-451b-9c39-905a201a4af4\") " Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.117125 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-service-ca" (OuterVolumeSpecName: "service-ca") pod "547350b1-93f3-451b-9c39-905a201a4af4" (UID: "547350b1-93f3-451b-9c39-905a201a4af4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.117196 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-console-config" (OuterVolumeSpecName: "console-config") pod "547350b1-93f3-451b-9c39-905a201a4af4" (UID: "547350b1-93f3-451b-9c39-905a201a4af4"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.117628 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "547350b1-93f3-451b-9c39-905a201a4af4" (UID: "547350b1-93f3-451b-9c39-905a201a4af4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.117676 4792 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-service-ca\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.117692 4792 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-console-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.117791 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "547350b1-93f3-451b-9c39-905a201a4af4" (UID: "547350b1-93f3-451b-9c39-905a201a4af4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.120524 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "547350b1-93f3-451b-9c39-905a201a4af4" (UID: "547350b1-93f3-451b-9c39-905a201a4af4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.120780 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547350b1-93f3-451b-9c39-905a201a4af4-kube-api-access-wgvsb" (OuterVolumeSpecName: "kube-api-access-wgvsb") pod "547350b1-93f3-451b-9c39-905a201a4af4" (UID: "547350b1-93f3-451b-9c39-905a201a4af4"). InnerVolumeSpecName "kube-api-access-wgvsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.132941 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "547350b1-93f3-451b-9c39-905a201a4af4" (UID: "547350b1-93f3-451b-9c39-905a201a4af4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.219093 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgvsb\" (UniqueName: \"kubernetes.io/projected/547350b1-93f3-451b-9c39-905a201a4af4-kube-api-access-wgvsb\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.219127 4792 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.219142 4792 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.219155 4792 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/547350b1-93f3-451b-9c39-905a201a4af4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.219166 4792 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/547350b1-93f3-451b-9c39-905a201a4af4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.418204 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-549b454695-9zzgx"] Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.429598 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-549b454695-9zzgx"] Nov 27 17:30:02 crc kubenswrapper[4792]: I1127 17:30:02.707159 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547350b1-93f3-451b-9c39-905a201a4af4" path="/var/lib/kubelet/pods/547350b1-93f3-451b-9c39-905a201a4af4/volumes" Nov 27 17:30:05 crc kubenswrapper[4792]: I1127 17:30:05.396566 4792 scope.go:117] "RemoveContainer" containerID="a76c57692388131dbdb6105cf19be36865dca8b8e1c4b79e670ee8e8f064cf6f" Nov 27 17:30:05 crc kubenswrapper[4792]: I1127 17:30:05.526178 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:30:05 crc kubenswrapper[4792]: I1127 17:30:05.807868 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" Nov 27 17:30:05 crc kubenswrapper[4792]: I1127 17:30:05.893218 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5pwzl"] Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.056349 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"] Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.125904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2n56v" event={"ID":"66a7953b-06d4-453f-801c-4873d0d43c7a","Type":"ContainerStarted","Data":"8ef7b6ee897506286a41a7fce85ed6d3670de598eee93818dca389cc3429f102"} Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.137598 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"8ed6358b-2030-436d-a847-724a53f802ea","Type":"ContainerStarted","Data":"2bf7d26c216a998029fbb151121a7c90fe0bb2ce8c9e8c58b5d7e9dd32438517"} Nov 27 
Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.161171 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerStarted","Data":"ba21a90d77c87ad08a6b33b4731b407e707768f950809fb138f41fe974418d89"}
Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.163821 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2n56v" podStartSLOduration=9.56581146 podStartE2EDuration="18.163798541s" podCreationTimestamp="2025-11-27 17:29:48 +0000 UTC" firstStartedPulling="2025-11-27 17:29:56.891824571 +0000 UTC m=+1219.234650889" lastFinishedPulling="2025-11-27 17:30:05.489811652 +0000 UTC m=+1227.832637970" observedRunningTime="2025-11-27 17:30:06.152769136 +0000 UTC m=+1228.495595454" watchObservedRunningTime="2025-11-27 17:30:06.163798541 +0000 UTC m=+1228.506624859"
Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.184468 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" podUID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" containerName="dnsmasq-dns" containerID="cri-o://062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777" gracePeriod=10
Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.184849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56" event={"ID":"3f657f1d-3b19-4447-8d11-3525019b515b","Type":"ContainerStarted","Data":"1be702149306a72dff49090ad7109d6fac4c2eed14df9c8ccf2c3d82a905d4a3"}
Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.241336 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.257827 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371977.596968 podStartE2EDuration="59.257808643s" podCreationTimestamp="2025-11-27 17:29:07 +0000 UTC" firstStartedPulling="2025-11-27 17:29:09.898469594 +0000 UTC m=+1172.241295912" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:06.1902305 +0000 UTC m=+1228.533056818" watchObservedRunningTime="2025-11-27 17:30:06.257808643 +0000 UTC m=+1228.600634961"
Nov 27 17:30:06 crc kubenswrapper[4792]: I1127 17:30:06.836684 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl"
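
The openstack-galera-0 latency entry deserves a second look: podStartSLOduration=-9223371977.596968 is an arithmetic artifact, not a real duration. lastFinishedPulling is the zero time, so the pull window lastFinishedPulling - firstStartedPulling underflows and Go's time.Time.Sub clamps it to the minimum time.Duration (math.MinInt64 nanoseconds); subtracting that from the 59.257808643s E2E duration then wraps around int64, and MinInt64 ns + 59.257808643s is exactly the logged value. A sketch reproducing it (the formula is our reconstruction from the logged fields, not kubelet source):

// Reproduce podStartSLOduration=-9223371977.596968 from the galera entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	firstStartedPulling, _ := time.Parse(time.RFC3339Nano, "2025-11-27T17:29:09.898469594Z")
	var lastFinishedPulling time.Time // zero value: 0001-01-01 00:00:00 +0000 UTC, as logged

	e2e := 59257808643 * time.Nanosecond // podStartE2EDuration=59.257808643s
	// Time.Sub clamps on overflow, so this saturates at math.MinInt64 ns:
	pull := lastFinishedPulling.Sub(firstStartedPulling)
	// Plain int64 subtraction wraps: e2e - MinInt64 == e2e + MinInt64.
	slo := e2e - pull
	fmt.Println(pull)                   // -2562047h47m16.854775808s
	fmt.Printf("%.6f\n", slo.Seconds()) // ≈ -9223371977.596968, as logged
}
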
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.003605 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t28dt\" (UniqueName: \"kubernetes.io/projected/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-kube-api-access-t28dt\") pod \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.003775 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-ovsdbserver-nb\") pod \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.004603 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-dns-svc\") pod \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.004634 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-config\") pod \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\" (UID: \"4bf7c13a-da9d-47c9-af54-bd0e91aa659f\") " Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.011773 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-kube-api-access-t28dt" (OuterVolumeSpecName: "kube-api-access-t28dt") pod "4bf7c13a-da9d-47c9-af54-bd0e91aa659f" (UID: "4bf7c13a-da9d-47c9-af54-bd0e91aa659f"). InnerVolumeSpecName "kube-api-access-t28dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.085461 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bf7c13a-da9d-47c9-af54-bd0e91aa659f" (UID: "4bf7c13a-da9d-47c9-af54-bd0e91aa659f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.101542 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-config" (OuterVolumeSpecName: "config") pod "4bf7c13a-da9d-47c9-af54-bd0e91aa659f" (UID: "4bf7c13a-da9d-47c9-af54-bd0e91aa659f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.106438 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t28dt\" (UniqueName: \"kubernetes.io/projected/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-kube-api-access-t28dt\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.106462 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.106471 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.130449 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bf7c13a-da9d-47c9-af54-bd0e91aa659f" (UID: "4bf7c13a-da9d-47c9-af54-bd0e91aa659f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.196151 4792 generic.go:334] "Generic (PLEG): container finished" podID="3f657f1d-3b19-4447-8d11-3525019b515b" containerID="5420a4342779ba081e9b32a665893fd8815c358969d6a6e06c6595bd04bc1362" exitCode=0 Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.196219 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56" event={"ID":"3f657f1d-3b19-4447-8d11-3525019b515b","Type":"ContainerDied","Data":"5420a4342779ba081e9b32a665893fd8815c358969d6a6e06c6595bd04bc1362"} Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.197751 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"446f7473-cc3f-42b6-931c-eb1747df2c73","Type":"ContainerStarted","Data":"047ca70d0eede9976c19069f5376e1c1158883c8ee38f3b8297d05c18b384fc7"} Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.200264 4792 generic.go:334] "Generic (PLEG): container finished" podID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" containerID="062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777" exitCode=0 Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.201055 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.203503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" event={"ID":"4bf7c13a-da9d-47c9-af54-bd0e91aa659f","Type":"ContainerDied","Data":"062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777"} Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.203576 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-5pwzl" event={"ID":"4bf7c13a-da9d-47c9-af54-bd0e91aa659f","Type":"ContainerDied","Data":"a22b76abf4159495f5040d645ce7207edf37682ea72183bef0e9a7e26bce75d2"} Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.203594 4792 scope.go:117] "RemoveContainer" containerID="062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.208662 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bf7c13a-da9d-47c9-af54-bd0e91aa659f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.239051 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5pwzl"] Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.244573 4792 scope.go:117] "RemoveContainer" containerID="dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.250180 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-5pwzl"] Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.534727 4792 scope.go:117] "RemoveContainer" containerID="062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777" Nov 27 17:30:07 crc kubenswrapper[4792]: E1127 17:30:07.536195 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777\": container with ID starting with 062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777 not found: ID does not exist" containerID="062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.536239 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777"} err="failed to get container status \"062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777\": rpc error: code = NotFound desc = could not find container \"062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777\": container with ID starting with 062340560c12c6509165ca449343a0495ac786c7a34d97f491298d484a575777 not found: ID does not exist" Nov 27 17:30:07 crc kubenswrapper[4792]: I1127 17:30:07.536267 4792 scope.go:117] "RemoveContainer" containerID="dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334" Nov 27 17:30:07 crc kubenswrapper[4792]: E1127 17:30:07.536485 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334\": container with ID starting with dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334 not found: ID does not exist" containerID="dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334" Nov 27 17:30:07 crc 
kubenswrapper[4792]: I1127 17:30:07.536517 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334"} err="failed to get container status \"dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334\": rpc error: code = NotFound desc = could not find container \"dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334\": container with ID starting with dfaf996bf9584eaf7fb4637bc3462bf71e6e76fcac0e607ca41737d278c93334 not found: ID does not exist" Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.630105 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56" Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.705998 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" path="/var/lib/kubelet/pods/4bf7c13a-da9d-47c9-af54-bd0e91aa659f/volumes" Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.738124 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f657f1d-3b19-4447-8d11-3525019b515b-config-volume\") pod \"3f657f1d-3b19-4447-8d11-3525019b515b\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.738961 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f657f1d-3b19-4447-8d11-3525019b515b-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f657f1d-3b19-4447-8d11-3525019b515b" (UID: "3f657f1d-3b19-4447-8d11-3525019b515b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.739014 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxppw\" (UniqueName: \"kubernetes.io/projected/3f657f1d-3b19-4447-8d11-3525019b515b-kube-api-access-kxppw\") pod \"3f657f1d-3b19-4447-8d11-3525019b515b\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.739068 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f657f1d-3b19-4447-8d11-3525019b515b-secret-volume\") pod \"3f657f1d-3b19-4447-8d11-3525019b515b\" (UID: \"3f657f1d-3b19-4447-8d11-3525019b515b\") " Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.739588 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f657f1d-3b19-4447-8d11-3525019b515b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.747125 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f657f1d-3b19-4447-8d11-3525019b515b-kube-api-access-kxppw" (OuterVolumeSpecName: "kube-api-access-kxppw") pod "3f657f1d-3b19-4447-8d11-3525019b515b" (UID: "3f657f1d-3b19-4447-8d11-3525019b515b"). InnerVolumeSpecName "kube-api-access-kxppw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.768690 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f657f1d-3b19-4447-8d11-3525019b515b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f657f1d-3b19-4447-8d11-3525019b515b" (UID: "3f657f1d-3b19-4447-8d11-3525019b515b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.841476 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxppw\" (UniqueName: \"kubernetes.io/projected/3f657f1d-3b19-4447-8d11-3525019b515b-kube-api-access-kxppw\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:08 crc kubenswrapper[4792]: I1127 17:30:08.841520 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f657f1d-3b19-4447-8d11-3525019b515b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.101872 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.103048 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.228286 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerStarted","Data":"17c13d2c832bb867356d4dd0a1450f456458ecb987156a068dc943bd3b77152e"} Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.230833 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56" event={"ID":"3f657f1d-3b19-4447-8d11-3525019b515b","Type":"ContainerDied","Data":"1be702149306a72dff49090ad7109d6fac4c2eed14df9c8ccf2c3d82a905d4a3"} Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.230893 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be702149306a72dff49090ad7109d6fac4c2eed14df9c8ccf2c3d82a905d4a3" Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.230975 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56" Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.244931 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"446f7473-cc3f-42b6-931c-eb1747df2c73","Type":"ContainerStarted","Data":"4ff50a389a041bfa07a58950b4ea978c5887d2fbe14a42b59e86613356efdd15"} Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.244977 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"446f7473-cc3f-42b6-931c-eb1747df2c73","Type":"ContainerStarted","Data":"a6e43821494ac3a8e760efe49ea3f664509a0a616a7f311a75090ff12050fa1e"} Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.246137 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 27 17:30:09 crc kubenswrapper[4792]: I1127 17:30:09.279366 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=8.922008859 podStartE2EDuration="10.279344641s" podCreationTimestamp="2025-11-27 17:29:59 +0000 UTC" firstStartedPulling="2025-11-27 17:30:06.223798086 +0000 UTC m=+1228.566624404" lastFinishedPulling="2025-11-27 17:30:07.581133868 +0000 UTC m=+1229.923960186" observedRunningTime="2025-11-27 17:30:09.272368188 +0000 UTC m=+1231.615194506" watchObservedRunningTime="2025-11-27 17:30:09.279344641 +0000 UTC m=+1231.622170959" Nov 27 17:30:11 crc kubenswrapper[4792]: I1127 17:30:11.831302 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 27 17:30:11 crc kubenswrapper[4792]: I1127 17:30:11.936588 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 27 17:30:12 crc kubenswrapper[4792]: I1127 17:30:12.913582 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n2t2x" podUID="8445903e-bdf0-4581-a2ce-728410f878ac" containerName="ovn-controller" probeResult="failure" output=< Nov 27 17:30:12 crc kubenswrapper[4792]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 27 17:30:12 crc kubenswrapper[4792]: > Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.133782 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-pktkm"] Nov 27 17:30:13 crc kubenswrapper[4792]: E1127 17:30:13.134329 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f657f1d-3b19-4447-8d11-3525019b515b" containerName="collect-profiles" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.134351 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f657f1d-3b19-4447-8d11-3525019b515b" containerName="collect-profiles" Nov 27 17:30:13 crc kubenswrapper[4792]: E1127 17:30:13.134397 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547350b1-93f3-451b-9c39-905a201a4af4" containerName="console" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.134406 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="547350b1-93f3-451b-9c39-905a201a4af4" containerName="console" Nov 27 17:30:13 crc kubenswrapper[4792]: E1127 17:30:13.134416 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" containerName="dnsmasq-dns" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.134426 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" containerName="dnsmasq-dns" Nov 27 17:30:13 crc kubenswrapper[4792]: E1127 17:30:13.134454 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" containerName="init" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.134462 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" containerName="init" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.134725 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f657f1d-3b19-4447-8d11-3525019b515b" containerName="collect-profiles" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.134752 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="547350b1-93f3-451b-9c39-905a201a4af4" containerName="console" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.134791 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf7c13a-da9d-47c9-af54-bd0e91aa659f" containerName="dnsmasq-dns" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.135725 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.148972 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-81fd-account-create-update-9tmsm"] Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.150578 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.158608 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.161245 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-pktkm"] Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.173519 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-81fd-account-create-update-9tmsm"] Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.193584 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.206866 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dzgvb" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.243934 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb7k4\" (UniqueName: \"kubernetes.io/projected/f52348a4-2e60-4f12-a73b-c70e7134dc0f-kube-api-access-lb7k4\") pod \"mysqld-exporter-openstack-db-create-pktkm\" (UID: \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\") " pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.250733 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52348a4-2e60-4f12-a73b-c70e7134dc0f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-pktkm\" (UID: \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\") " pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.316978 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerStarted","Data":"a84d0689f8112f5fd3bd5625982eb05171182af80b34b0402de940b5ee57de28"} Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.343036 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.825243955 podStartE2EDuration="1m1.34300618s" podCreationTimestamp="2025-11-27 17:29:12 +0000 UTC" firstStartedPulling="2025-11-27 17:29:33.671388423 +0000 UTC m=+1196.014214741" lastFinishedPulling="2025-11-27 17:30:12.189150648 +0000 UTC m=+1234.531976966" observedRunningTime="2025-11-27 17:30:13.336918898 +0000 UTC m=+1235.679745216" watchObservedRunningTime="2025-11-27 17:30:13.34300618 +0000 UTC m=+1235.685832488" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.352556 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgrm5\" (UniqueName: \"kubernetes.io/projected/0c3345f6-9d76-4087-a0ad-037e8ee66a87-kube-api-access-sgrm5\") pod \"mysqld-exporter-81fd-account-create-update-9tmsm\" (UID: \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\") " pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.352704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb7k4\" (UniqueName: \"kubernetes.io/projected/f52348a4-2e60-4f12-a73b-c70e7134dc0f-kube-api-access-lb7k4\") pod \"mysqld-exporter-openstack-db-create-pktkm\" (UID: \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\") " pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.352773 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52348a4-2e60-4f12-a73b-c70e7134dc0f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-pktkm\" (UID: \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\") " pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.352831 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3345f6-9d76-4087-a0ad-037e8ee66a87-operator-scripts\") pod \"mysqld-exporter-81fd-account-create-update-9tmsm\" (UID: \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\") " pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.354239 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52348a4-2e60-4f12-a73b-c70e7134dc0f-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-pktkm\" (UID: \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\") " pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.372695 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb7k4\" (UniqueName: \"kubernetes.io/projected/f52348a4-2e60-4f12-a73b-c70e7134dc0f-kube-api-access-lb7k4\") pod \"mysqld-exporter-openstack-db-create-pktkm\" (UID: \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\") " pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.423926 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-n2t2x-config-97m5h"] Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.425538 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.432222 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.448171 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n2t2x-config-97m5h"] Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.454543 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3345f6-9d76-4087-a0ad-037e8ee66a87-operator-scripts\") pod \"mysqld-exporter-81fd-account-create-update-9tmsm\" (UID: \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\") " pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.454633 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgrm5\" (UniqueName: \"kubernetes.io/projected/0c3345f6-9d76-4087-a0ad-037e8ee66a87-kube-api-access-sgrm5\") pod \"mysqld-exporter-81fd-account-create-update-9tmsm\" (UID: \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\") " pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.455585 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3345f6-9d76-4087-a0ad-037e8ee66a87-operator-scripts\") pod \"mysqld-exporter-81fd-account-create-update-9tmsm\" (UID: \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\") " pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.464836 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.490254 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgrm5\" (UniqueName: \"kubernetes.io/projected/0c3345f6-9d76-4087-a0ad-037e8ee66a87-kube-api-access-sgrm5\") pod \"mysqld-exporter-81fd-account-create-update-9tmsm\" (UID: \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\") " pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.556482 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46hv\" (UniqueName: \"kubernetes.io/projected/9a18e073-03cf-4861-bb78-a23e0a294749-kube-api-access-z46hv\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.556890 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-scripts\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.556973 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-log-ovn\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.556999 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run-ovn\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.557120 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-additional-scripts\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.557152 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.658661 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-additional-scripts\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.658732 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.658769 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z46hv\" (UniqueName: \"kubernetes.io/projected/9a18e073-03cf-4861-bb78-a23e0a294749-kube-api-access-z46hv\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.658848 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-scripts\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.658950 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-log-ovn\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.658975 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run-ovn\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.659115 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run-ovn\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.659132 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.659388 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-log-ovn\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.659938 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-additional-scripts\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.661942 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-scripts\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.680510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z46hv\" (UniqueName: \"kubernetes.io/projected/9a18e073-03cf-4861-bb78-a23e0a294749-kube-api-access-z46hv\") pod \"ovn-controller-n2t2x-config-97m5h\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.786600 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.868582 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:13 crc kubenswrapper[4792]: I1127 17:30:13.990977 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-pktkm"] Nov 27 17:30:13 crc kubenswrapper[4792]: W1127 17:30:13.999703 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf52348a4_2e60_4f12_a73b_c70e7134dc0f.slice/crio-ddec2a5853369b32403b8070ac8768e92e64d24291fdfe0d61637a1014c75790 WatchSource:0}: Error finding container ddec2a5853369b32403b8070ac8768e92e64d24291fdfe0d61637a1014c75790: Status 404 returned error can't find the container with id ddec2a5853369b32403b8070ac8768e92e64d24291fdfe0d61637a1014c75790 Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.363760 4792 generic.go:334] "Generic (PLEG): container finished" podID="66a7953b-06d4-453f-801c-4873d0d43c7a" containerID="8ef7b6ee897506286a41a7fce85ed6d3670de598eee93818dca389cc3429f102" exitCode=0 Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.363821 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2n56v" event={"ID":"66a7953b-06d4-453f-801c-4873d0d43c7a","Type":"ContainerDied","Data":"8ef7b6ee897506286a41a7fce85ed6d3670de598eee93818dca389cc3429f102"} Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.367607 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" event={"ID":"f52348a4-2e60-4f12-a73b-c70e7134dc0f","Type":"ContainerStarted","Data":"8eed29aeeab63d9ffe91006391cd3a0403cceb48782f6738e826eeeb51c0f3a4"} Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.367755 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" event={"ID":"f52348a4-2e60-4f12-a73b-c70e7134dc0f","Type":"ContainerStarted","Data":"ddec2a5853369b32403b8070ac8768e92e64d24291fdfe0d61637a1014c75790"} Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.377126 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.377173 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.382700 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 
17:30:14.418131 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-81fd-account-create-update-9tmsm"] Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.430605 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n2t2x-config-97m5h"] Nov 27 17:30:14 crc kubenswrapper[4792]: W1127 17:30:14.431861 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c3345f6_9d76_4087_a0ad_037e8ee66a87.slice/crio-4b20374c294cdafdebde4afe99ee7efcd5c94d7b8d65b5d4cb774ccac739a753 WatchSource:0}: Error finding container 4b20374c294cdafdebde4afe99ee7efcd5c94d7b8d65b5d4cb774ccac739a753: Status 404 returned error can't find the container with id 4b20374c294cdafdebde4afe99ee7efcd5c94d7b8d65b5d4cb774ccac739a753 Nov 27 17:30:14 crc kubenswrapper[4792]: I1127 17:30:14.451247 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" podStartSLOduration=1.451228977 podStartE2EDuration="1.451228977s" podCreationTimestamp="2025-11-27 17:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:14.447960226 +0000 UTC m=+1236.790786544" watchObservedRunningTime="2025-11-27 17:30:14.451228977 +0000 UTC m=+1236.794055295" Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.399023 4792 generic.go:334] "Generic (PLEG): container finished" podID="9a18e073-03cf-4861-bb78-a23e0a294749" containerID="25577c162a3055549c432017c68328aafaa545ebd2530ef5c7d10af5e2556c7c" exitCode=0 Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.399686 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n2t2x-config-97m5h" event={"ID":"9a18e073-03cf-4861-bb78-a23e0a294749","Type":"ContainerDied","Data":"25577c162a3055549c432017c68328aafaa545ebd2530ef5c7d10af5e2556c7c"} Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.399723 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n2t2x-config-97m5h" event={"ID":"9a18e073-03cf-4861-bb78-a23e0a294749","Type":"ContainerStarted","Data":"daf9d702524b242f4872b8685be6bcffe1e287241b242db45a44cf097ce3387c"} Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.452762 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" event={"ID":"f52348a4-2e60-4f12-a73b-c70e7134dc0f","Type":"ContainerDied","Data":"8eed29aeeab63d9ffe91006391cd3a0403cceb48782f6738e826eeeb51c0f3a4"} Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.450824 4792 generic.go:334] "Generic (PLEG): container finished" podID="f52348a4-2e60-4f12-a73b-c70e7134dc0f" containerID="8eed29aeeab63d9ffe91006391cd3a0403cceb48782f6738e826eeeb51c0f3a4" exitCode=0 Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.519816 4792 generic.go:334] "Generic (PLEG): container finished" podID="0c3345f6-9d76-4087-a0ad-037e8ee66a87" containerID="ffa4434242fc4093842308e5cd53fa45f6b558607983153cafa04d111eb30606" exitCode=0 Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.520712 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" event={"ID":"0c3345f6-9d76-4087-a0ad-037e8ee66a87","Type":"ContainerDied","Data":"ffa4434242fc4093842308e5cd53fa45f6b558607983153cafa04d111eb30606"} Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.520747 
4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" event={"ID":"0c3345f6-9d76-4087-a0ad-037e8ee66a87","Type":"ContainerStarted","Data":"4b20374c294cdafdebde4afe99ee7efcd5c94d7b8d65b5d4cb774ccac739a753"} Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.541417 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.918872 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.953543 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-combined-ca-bundle\") pod \"66a7953b-06d4-453f-801c-4873d0d43c7a\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.953616 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-swiftconf\") pod \"66a7953b-06d4-453f-801c-4873d0d43c7a\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.953678 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-scripts\") pod \"66a7953b-06d4-453f-801c-4873d0d43c7a\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.953784 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-dispersionconf\") pod \"66a7953b-06d4-453f-801c-4873d0d43c7a\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.953831 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66a7953b-06d4-453f-801c-4873d0d43c7a-etc-swift\") pod \"66a7953b-06d4-453f-801c-4873d0d43c7a\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.953949 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6pr5\" (UniqueName: \"kubernetes.io/projected/66a7953b-06d4-453f-801c-4873d0d43c7a-kube-api-access-v6pr5\") pod \"66a7953b-06d4-453f-801c-4873d0d43c7a\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.954013 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-ring-data-devices\") pod \"66a7953b-06d4-453f-801c-4873d0d43c7a\" (UID: \"66a7953b-06d4-453f-801c-4873d0d43c7a\") " Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.955235 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "66a7953b-06d4-453f-801c-4873d0d43c7a" (UID: "66a7953b-06d4-453f-801c-4873d0d43c7a"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.955544 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a7953b-06d4-453f-801c-4873d0d43c7a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "66a7953b-06d4-453f-801c-4873d0d43c7a" (UID: "66a7953b-06d4-453f-801c-4873d0d43c7a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.962440 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a7953b-06d4-453f-801c-4873d0d43c7a-kube-api-access-v6pr5" (OuterVolumeSpecName: "kube-api-access-v6pr5") pod "66a7953b-06d4-453f-801c-4873d0d43c7a" (UID: "66a7953b-06d4-453f-801c-4873d0d43c7a"). InnerVolumeSpecName "kube-api-access-v6pr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.964687 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "66a7953b-06d4-453f-801c-4873d0d43c7a" (UID: "66a7953b-06d4-453f-801c-4873d0d43c7a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:15 crc kubenswrapper[4792]: I1127 17:30:15.998087 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66a7953b-06d4-453f-801c-4873d0d43c7a" (UID: "66a7953b-06d4-453f-801c-4873d0d43c7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.002017 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-scripts" (OuterVolumeSpecName: "scripts") pod "66a7953b-06d4-453f-801c-4873d0d43c7a" (UID: "66a7953b-06d4-453f-801c-4873d0d43c7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.006854 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "66a7953b-06d4-453f-801c-4873d0d43c7a" (UID: "66a7953b-06d4-453f-801c-4873d0d43c7a"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.057099 4792 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.057151 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.057170 4792 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.057191 4792 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/66a7953b-06d4-453f-801c-4873d0d43c7a-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.057212 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6pr5\" (UniqueName: \"kubernetes.io/projected/66a7953b-06d4-453f-801c-4873d0d43c7a-kube-api-access-v6pr5\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.057231 4792 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/66a7953b-06d4-453f-801c-4873d0d43c7a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.057246 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66a7953b-06d4-453f-801c-4873d0d43c7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.361637 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.366019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b7d63be6-0f2b-4b86-abec-4576d23792a9-etc-swift\") pod \"swift-storage-0\" (UID: \"b7d63be6-0f2b-4b86-abec-4576d23792a9\") " pod="openstack/swift-storage-0" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.528410 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2n56v" event={"ID":"66a7953b-06d4-453f-801c-4873d0d43c7a","Type":"ContainerDied","Data":"a3f1cb4cb770ceea51ef39cba2dab115690ec4b51e5af6f34df2b50c92bb3a61"} Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.528465 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3f1cb4cb770ceea51ef39cba2dab115690ec4b51e5af6f34df2b50c92bb3a61" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.528551 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2n56v" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.663820 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.925610 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.971767 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run\") pod \"9a18e073-03cf-4861-bb78-a23e0a294749\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.971990 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run-ovn\") pod \"9a18e073-03cf-4861-bb78-a23e0a294749\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.972050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-scripts\") pod \"9a18e073-03cf-4861-bb78-a23e0a294749\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.972086 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z46hv\" (UniqueName: \"kubernetes.io/projected/9a18e073-03cf-4861-bb78-a23e0a294749-kube-api-access-z46hv\") pod \"9a18e073-03cf-4861-bb78-a23e0a294749\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.972126 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run" (OuterVolumeSpecName: "var-run") pod "9a18e073-03cf-4861-bb78-a23e0a294749" (UID: "9a18e073-03cf-4861-bb78-a23e0a294749"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.972177 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-additional-scripts\") pod \"9a18e073-03cf-4861-bb78-a23e0a294749\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.972209 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-log-ovn\") pod \"9a18e073-03cf-4861-bb78-a23e0a294749\" (UID: \"9a18e073-03cf-4861-bb78-a23e0a294749\") " Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.972597 4792 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.972661 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9a18e073-03cf-4861-bb78-a23e0a294749" (UID: "9a18e073-03cf-4861-bb78-a23e0a294749"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.972692 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9a18e073-03cf-4861-bb78-a23e0a294749" (UID: "9a18e073-03cf-4861-bb78-a23e0a294749"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.973315 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9a18e073-03cf-4861-bb78-a23e0a294749" (UID: "9a18e073-03cf-4861-bb78-a23e0a294749"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.973704 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-scripts" (OuterVolumeSpecName: "scripts") pod "9a18e073-03cf-4861-bb78-a23e0a294749" (UID: "9a18e073-03cf-4861-bb78-a23e0a294749"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:16 crc kubenswrapper[4792]: I1127 17:30:16.976666 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a18e073-03cf-4861-bb78-a23e0a294749-kube-api-access-z46hv" (OuterVolumeSpecName: "kube-api-access-z46hv") pod "9a18e073-03cf-4861-bb78-a23e0a294749" (UID: "9a18e073-03cf-4861-bb78-a23e0a294749"). InnerVolumeSpecName "kube-api-access-z46hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.074380 4792 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.074713 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.074725 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z46hv\" (UniqueName: \"kubernetes.io/projected/9a18e073-03cf-4861-bb78-a23e0a294749-kube-api-access-z46hv\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.074734 4792 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9a18e073-03cf-4861-bb78-a23e0a294749-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.074743 4792 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9a18e073-03cf-4861-bb78-a23e0a294749-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.108206 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.139443 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.176291 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb7k4\" (UniqueName: \"kubernetes.io/projected/f52348a4-2e60-4f12-a73b-c70e7134dc0f-kube-api-access-lb7k4\") pod \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\" (UID: \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\") " Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.176336 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3345f6-9d76-4087-a0ad-037e8ee66a87-operator-scripts\") pod \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\" (UID: \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\") " Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.176455 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgrm5\" (UniqueName: \"kubernetes.io/projected/0c3345f6-9d76-4087-a0ad-037e8ee66a87-kube-api-access-sgrm5\") pod \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\" (UID: \"0c3345f6-9d76-4087-a0ad-037e8ee66a87\") " Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.176490 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52348a4-2e60-4f12-a73b-c70e7134dc0f-operator-scripts\") pod \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\" (UID: \"f52348a4-2e60-4f12-a73b-c70e7134dc0f\") " Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.178398 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3345f6-9d76-4087-a0ad-037e8ee66a87-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c3345f6-9d76-4087-a0ad-037e8ee66a87" (UID: "0c3345f6-9d76-4087-a0ad-037e8ee66a87"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.178426 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f52348a4-2e60-4f12-a73b-c70e7134dc0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f52348a4-2e60-4f12-a73b-c70e7134dc0f" (UID: "f52348a4-2e60-4f12-a73b-c70e7134dc0f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.183137 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c3345f6-9d76-4087-a0ad-037e8ee66a87-kube-api-access-sgrm5" (OuterVolumeSpecName: "kube-api-access-sgrm5") pod "0c3345f6-9d76-4087-a0ad-037e8ee66a87" (UID: "0c3345f6-9d76-4087-a0ad-037e8ee66a87"). InnerVolumeSpecName "kube-api-access-sgrm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.183173 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f52348a4-2e60-4f12-a73b-c70e7134dc0f-kube-api-access-lb7k4" (OuterVolumeSpecName: "kube-api-access-lb7k4") pod "f52348a4-2e60-4f12-a73b-c70e7134dc0f" (UID: "f52348a4-2e60-4f12-a73b-c70e7134dc0f"). InnerVolumeSpecName "kube-api-access-lb7k4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.279283 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f52348a4-2e60-4f12-a73b-c70e7134dc0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.279318 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3345f6-9d76-4087-a0ad-037e8ee66a87-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.279331 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb7k4\" (UniqueName: \"kubernetes.io/projected/f52348a4-2e60-4f12-a73b-c70e7134dc0f-kube-api-access-lb7k4\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.279342 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgrm5\" (UniqueName: \"kubernetes.io/projected/0c3345f6-9d76-4087-a0ad-037e8ee66a87-kube-api-access-sgrm5\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.314655 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 27 17:30:17 crc kubenswrapper[4792]: W1127 17:30:17.315776 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7d63be6_0f2b_4b86_abec_4576d23792a9.slice/crio-63ce0da9451dd7efa94fce790c3401a036a92cb1e2d4701f21234a2235f2b07f WatchSource:0}: Error finding container 63ce0da9451dd7efa94fce790c3401a036a92cb1e2d4701f21234a2235f2b07f: Status 404 returned error can't find the container with id 63ce0da9451dd7efa94fce790c3401a036a92cb1e2d4701f21234a2235f2b07f Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.538025 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n2t2x-config-97m5h" event={"ID":"9a18e073-03cf-4861-bb78-a23e0a294749","Type":"ContainerDied","Data":"daf9d702524b242f4872b8685be6bcffe1e287241b242db45a44cf097ce3387c"} Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.538068 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n2t2x-config-97m5h" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.538073 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf9d702524b242f4872b8685be6bcffe1e287241b242db45a44cf097ce3387c" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.540051 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" event={"ID":"f52348a4-2e60-4f12-a73b-c70e7134dc0f","Type":"ContainerDied","Data":"ddec2a5853369b32403b8070ac8768e92e64d24291fdfe0d61637a1014c75790"} Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.540103 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddec2a5853369b32403b8070ac8768e92e64d24291fdfe0d61637a1014c75790" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.540071 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-pktkm" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.541210 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"63ce0da9451dd7efa94fce790c3401a036a92cb1e2d4701f21234a2235f2b07f"} Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.542469 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" event={"ID":"0c3345f6-9d76-4087-a0ad-037e8ee66a87","Type":"ContainerDied","Data":"4b20374c294cdafdebde4afe99ee7efcd5c94d7b8d65b5d4cb774ccac739a753"} Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.542498 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b20374c294cdafdebde4afe99ee7efcd5c94d7b8d65b5d4cb774ccac739a753" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.542505 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-81fd-account-create-update-9tmsm" Nov 27 17:30:17 crc kubenswrapper[4792]: I1127 17:30:17.909770 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-n2t2x" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.069268 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n2t2x-config-97m5h"] Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.077003 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n2t2x-config-97m5h"] Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.394864 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f"] Nov 27 17:30:18 crc kubenswrapper[4792]: E1127 17:30:18.395284 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c3345f6-9d76-4087-a0ad-037e8ee66a87" containerName="mariadb-account-create-update" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.395298 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3345f6-9d76-4087-a0ad-037e8ee66a87" containerName="mariadb-account-create-update" Nov 27 17:30:18 crc kubenswrapper[4792]: E1127 17:30:18.395349 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a7953b-06d4-453f-801c-4873d0d43c7a" containerName="swift-ring-rebalance" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.395358 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a7953b-06d4-453f-801c-4873d0d43c7a" containerName="swift-ring-rebalance" Nov 27 17:30:18 crc kubenswrapper[4792]: E1127 17:30:18.395376 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a18e073-03cf-4861-bb78-a23e0a294749" containerName="ovn-config" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.395383 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a18e073-03cf-4861-bb78-a23e0a294749" containerName="ovn-config" Nov 27 17:30:18 crc kubenswrapper[4792]: E1127 17:30:18.395394 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f52348a4-2e60-4f12-a73b-c70e7134dc0f" containerName="mariadb-database-create" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.395401 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52348a4-2e60-4f12-a73b-c70e7134dc0f" containerName="mariadb-database-create" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.395628 4792 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9a18e073-03cf-4861-bb78-a23e0a294749" containerName="ovn-config" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.396825 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f52348a4-2e60-4f12-a73b-c70e7134dc0f" containerName="mariadb-database-create" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.396852 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a7953b-06d4-453f-801c-4873d0d43c7a" containerName="swift-ring-rebalance" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.396873 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c3345f6-9d76-4087-a0ad-037e8ee66a87" containerName="mariadb-account-create-update" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.397775 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.471365 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f"] Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.552807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c410408-137e-4436-b401-2e9b2de55ac1-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-ctg6f\" (UID: \"9c410408-137e-4436-b401-2e9b2de55ac1\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.552860 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjff\" (UniqueName: \"kubernetes.io/projected/9c410408-137e-4436-b401-2e9b2de55ac1-kube-api-access-tbjff\") pod \"mysqld-exporter-openstack-cell1-db-create-ctg6f\" (UID: \"9c410408-137e-4436-b401-2e9b2de55ac1\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.629478 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.630248 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="prometheus" containerID="cri-o://ba21a90d77c87ad08a6b33b4731b407e707768f950809fb138f41fe974418d89" gracePeriod=600 Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.630340 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="config-reloader" containerID="cri-o://17c13d2c832bb867356d4dd0a1450f456458ecb987156a068dc943bd3b77152e" gracePeriod=600 Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.630330 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="thanos-sidecar" containerID="cri-o://a84d0689f8112f5fd3bd5625982eb05171182af80b34b0402de940b5ee57de28" gracePeriod=600 Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.654823 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9c410408-137e-4436-b401-2e9b2de55ac1-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-ctg6f\" (UID: \"9c410408-137e-4436-b401-2e9b2de55ac1\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.654884 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjff\" (UniqueName: \"kubernetes.io/projected/9c410408-137e-4436-b401-2e9b2de55ac1-kube-api-access-tbjff\") pod \"mysqld-exporter-openstack-cell1-db-create-ctg6f\" (UID: \"9c410408-137e-4436-b401-2e9b2de55ac1\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.655675 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c410408-137e-4436-b401-2e9b2de55ac1-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-ctg6f\" (UID: \"9c410408-137e-4436-b401-2e9b2de55ac1\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.682315 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjff\" (UniqueName: \"kubernetes.io/projected/9c410408-137e-4436-b401-2e9b2de55ac1-kube-api-access-tbjff\") pod \"mysqld-exporter-openstack-cell1-db-create-ctg6f\" (UID: \"9c410408-137e-4436-b401-2e9b2de55ac1\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.700529 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a18e073-03cf-4861-bb78-a23e0a294749" path="/var/lib/kubelet/pods/9a18e073-03cf-4861-bb78-a23e0a294749/volumes" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.701209 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-b4b8-account-create-update-xdc22"] Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.702452 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.703845 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.707858 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b4b8-account-create-update-xdc22"] Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.769810 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.858002 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/de1907ef-3b87-4899-a1f7-75096bb1b94e-kube-api-access-gxnqt\") pod \"mysqld-exporter-b4b8-account-create-update-xdc22\" (UID: \"de1907ef-3b87-4899-a1f7-75096bb1b94e\") " pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.858373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de1907ef-3b87-4899-a1f7-75096bb1b94e-operator-scripts\") pod \"mysqld-exporter-b4b8-account-create-update-xdc22\" (UID: \"de1907ef-3b87-4899-a1f7-75096bb1b94e\") " pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.961037 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/de1907ef-3b87-4899-a1f7-75096bb1b94e-kube-api-access-gxnqt\") pod \"mysqld-exporter-b4b8-account-create-update-xdc22\" (UID: \"de1907ef-3b87-4899-a1f7-75096bb1b94e\") " pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.962485 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de1907ef-3b87-4899-a1f7-75096bb1b94e-operator-scripts\") pod \"mysqld-exporter-b4b8-account-create-update-xdc22\" (UID: \"de1907ef-3b87-4899-a1f7-75096bb1b94e\") " pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.963332 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de1907ef-3b87-4899-a1f7-75096bb1b94e-operator-scripts\") pod \"mysqld-exporter-b4b8-account-create-update-xdc22\" (UID: \"de1907ef-3b87-4899-a1f7-75096bb1b94e\") " pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:18 crc kubenswrapper[4792]: I1127 17:30:18.992056 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/de1907ef-3b87-4899-a1f7-75096bb1b94e-kube-api-access-gxnqt\") pod \"mysqld-exporter-b4b8-account-create-update-xdc22\" (UID: \"de1907ef-3b87-4899-a1f7-75096bb1b94e\") " pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.079141 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.282474 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f"] Nov 27 17:30:19 crc kubenswrapper[4792]: W1127 17:30:19.295480 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c410408_137e_4436_b401_2e9b2de55ac1.slice/crio-5b9d78e530e1bae7e27dbe1b2f0a7c1146b47a35f448abb654b27324479083ac WatchSource:0}: Error finding container 5b9d78e530e1bae7e27dbe1b2f0a7c1146b47a35f448abb654b27324479083ac: Status 404 returned error can't find the container with id 5b9d78e530e1bae7e27dbe1b2f0a7c1146b47a35f448abb654b27324479083ac Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.403175 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="prometheus" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.577294 4792 generic.go:334] "Generic (PLEG): container finished" podID="2af964c3-1de4-48af-a89c-df58527be8cb" containerID="a84d0689f8112f5fd3bd5625982eb05171182af80b34b0402de940b5ee57de28" exitCode=0 Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.577565 4792 generic.go:334] "Generic (PLEG): container finished" podID="2af964c3-1de4-48af-a89c-df58527be8cb" containerID="17c13d2c832bb867356d4dd0a1450f456458ecb987156a068dc943bd3b77152e" exitCode=0 Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.577576 4792 generic.go:334] "Generic (PLEG): container finished" podID="2af964c3-1de4-48af-a89c-df58527be8cb" containerID="ba21a90d77c87ad08a6b33b4731b407e707768f950809fb138f41fe974418d89" exitCode=0 Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.577622 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerDied","Data":"a84d0689f8112f5fd3bd5625982eb05171182af80b34b0402de940b5ee57de28"} Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.577671 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerDied","Data":"17c13d2c832bb867356d4dd0a1450f456458ecb987156a068dc943bd3b77152e"} Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.577684 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerDied","Data":"ba21a90d77c87ad08a6b33b4731b407e707768f950809fb138f41fe974418d89"} Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.579538 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" event={"ID":"9c410408-137e-4436-b401-2e9b2de55ac1","Type":"ContainerStarted","Data":"5db4d07c1cda5ba33a1263a82a2d3bd02028a8939cb0caeb7c1c50ec4d89817f"} Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.579564 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" event={"ID":"9c410408-137e-4436-b401-2e9b2de55ac1","Type":"ContainerStarted","Data":"5b9d78e530e1bae7e27dbe1b2f0a7c1146b47a35f448abb654b27324479083ac"} Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.604983 
4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" podStartSLOduration=1.6049608800000001 podStartE2EDuration="1.60496088s" podCreationTimestamp="2025-11-27 17:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:19.601275619 +0000 UTC m=+1241.944101947" watchObservedRunningTime="2025-11-27 17:30:19.60496088 +0000 UTC m=+1241.947787198" Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.703148 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 27 17:30:19 crc kubenswrapper[4792]: I1127 17:30:19.851229 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b4b8-account-create-update-xdc22"] Nov 27 17:30:19 crc kubenswrapper[4792]: W1127 17:30:19.882738 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde1907ef_3b87_4899_a1f7_75096bb1b94e.slice/crio-e68ad7618528694891da426be5075aa812a57689828fba9378195a7c30d44251 WatchSource:0}: Error finding container e68ad7618528694891da426be5075aa812a57689828fba9378195a7c30d44251: Status 404 returned error can't find the container with id e68ad7618528694891da426be5075aa812a57689828fba9378195a7c30d44251 Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.539587 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.630042 4792 generic.go:334] "Generic (PLEG): container finished" podID="de1907ef-3b87-4899-a1f7-75096bb1b94e" containerID="293e7cf97954891e39541ec865cf261cb95c36675cf85bbaad3c609f93cf323e" exitCode=0 Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.630111 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" event={"ID":"de1907ef-3b87-4899-a1f7-75096bb1b94e","Type":"ContainerDied","Data":"293e7cf97954891e39541ec865cf261cb95c36675cf85bbaad3c609f93cf323e"} Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.630143 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" event={"ID":"de1907ef-3b87-4899-a1f7-75096bb1b94e","Type":"ContainerStarted","Data":"e68ad7618528694891da426be5075aa812a57689828fba9378195a7c30d44251"} Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.633113 4792 generic.go:334] "Generic (PLEG): container finished" podID="9c410408-137e-4436-b401-2e9b2de55ac1" containerID="5db4d07c1cda5ba33a1263a82a2d3bd02028a8939cb0caeb7c1c50ec4d89817f" exitCode=0 Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.633177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" event={"ID":"9c410408-137e-4436-b401-2e9b2de55ac1","Type":"ContainerDied","Data":"5db4d07c1cda5ba33a1263a82a2d3bd02028a8939cb0caeb7c1c50ec4d89817f"} Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.639380 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2af964c3-1de4-48af-a89c-df58527be8cb","Type":"ContainerDied","Data":"0bb19d748aa06724e43f18b4535e522dd934beda1f393988028662634d95e4f5"} Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.641872 4792 scope.go:117] "RemoveContainer" 
containerID="a84d0689f8112f5fd3bd5625982eb05171182af80b34b0402de940b5ee57de28" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.640318 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.698217 4792 scope.go:117] "RemoveContainer" containerID="17c13d2c832bb867356d4dd0a1450f456458ecb987156a068dc943bd3b77152e" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.711662 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-tls-assets\") pod \"2af964c3-1de4-48af-a89c-df58527be8cb\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.711723 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9clp\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-kube-api-access-g9clp\") pod \"2af964c3-1de4-48af-a89c-df58527be8cb\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.711771 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2af964c3-1de4-48af-a89c-df58527be8cb-config-out\") pod \"2af964c3-1de4-48af-a89c-df58527be8cb\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.711812 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-thanos-prometheus-http-client-file\") pod \"2af964c3-1de4-48af-a89c-df58527be8cb\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.711982 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"2af964c3-1de4-48af-a89c-df58527be8cb\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.712050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-web-config\") pod \"2af964c3-1de4-48af-a89c-df58527be8cb\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.712128 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2af964c3-1de4-48af-a89c-df58527be8cb-prometheus-metric-storage-rulefiles-0\") pod \"2af964c3-1de4-48af-a89c-df58527be8cb\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.712171 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-config\") pod \"2af964c3-1de4-48af-a89c-df58527be8cb\" (UID: \"2af964c3-1de4-48af-a89c-df58527be8cb\") " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.713181 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2af964c3-1de4-48af-a89c-df58527be8cb-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2af964c3-1de4-48af-a89c-df58527be8cb" (UID: "2af964c3-1de4-48af-a89c-df58527be8cb"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.713444 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-j9j8r"] Nov 27 17:30:20 crc kubenswrapper[4792]: E1127 17:30:20.713860 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="thanos-sidecar" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.713880 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="thanos-sidecar" Nov 27 17:30:20 crc kubenswrapper[4792]: E1127 17:30:20.713905 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="init-config-reloader" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.713912 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="init-config-reloader" Nov 27 17:30:20 crc kubenswrapper[4792]: E1127 17:30:20.713928 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="prometheus" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.713934 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="prometheus" Nov 27 17:30:20 crc kubenswrapper[4792]: E1127 17:30:20.713953 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="config-reloader" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.713958 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="config-reloader" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.714133 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="config-reloader" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.714151 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="thanos-sidecar" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.714167 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" containerName="prometheus" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.717109 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2af964c3-1de4-48af-a89c-df58527be8cb-config-out" (OuterVolumeSpecName: "config-out") pod "2af964c3-1de4-48af-a89c-df58527be8cb" (UID: "2af964c3-1de4-48af-a89c-df58527be8cb"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.717368 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-kube-api-access-g9clp" (OuterVolumeSpecName: "kube-api-access-g9clp") pod "2af964c3-1de4-48af-a89c-df58527be8cb" (UID: "2af964c3-1de4-48af-a89c-df58527be8cb"). InnerVolumeSpecName "kube-api-access-g9clp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.717808 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2af964c3-1de4-48af-a89c-df58527be8cb" (UID: "2af964c3-1de4-48af-a89c-df58527be8cb"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.718464 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-j9j8r" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.719584 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-config" (OuterVolumeSpecName: "config") pod "2af964c3-1de4-48af-a89c-df58527be8cb" (UID: "2af964c3-1de4-48af-a89c-df58527be8cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.719704 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2af964c3-1de4-48af-a89c-df58527be8cb" (UID: "2af964c3-1de4-48af-a89c-df58527be8cb"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.720740 4792 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2af964c3-1de4-48af-a89c-df58527be8cb-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.720778 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.720789 4792 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.720800 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9clp\" (UniqueName: \"kubernetes.io/projected/2af964c3-1de4-48af-a89c-df58527be8cb-kube-api-access-g9clp\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.720810 4792 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2af964c3-1de4-48af-a89c-df58527be8cb-config-out\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.720819 4792 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.721515 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3ac4-account-create-update-46n8p"] Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.730297 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3ac4-account-create-update-46n8p" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.732787 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.745404 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-j9j8r"] Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.751552 4792 scope.go:117] "RemoveContainer" containerID="ba21a90d77c87ad08a6b33b4731b407e707768f950809fb138f41fe974418d89" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.751854 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-web-config" (OuterVolumeSpecName: "web-config") pod "2af964c3-1de4-48af-a89c-df58527be8cb" (UID: "2af964c3-1de4-48af-a89c-df58527be8cb"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.782028 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2af964c3-1de4-48af-a89c-df58527be8cb" (UID: "2af964c3-1de4-48af-a89c-df58527be8cb"). InnerVolumeSpecName "pvc-d876c2f3-479b-491c-9733-a774bc11004d". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.782122 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3ac4-account-create-update-46n8p"] Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.785502 4792 scope.go:117] "RemoveContainer" containerID="e6fbace0c614bc0bdf93c06b148404d8b6e1595e04e37259615b4e291b32bc89" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.822803 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c5f747-a058-4b9d-bc56-54a80f7a980c-operator-scripts\") pod \"keystone-db-create-j9j8r\" (UID: \"90c5f747-a058-4b9d-bc56-54a80f7a980c\") " pod="openstack/keystone-db-create-j9j8r" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.822999 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9557q\" (UniqueName: \"kubernetes.io/projected/90c5f747-a058-4b9d-bc56-54a80f7a980c-kube-api-access-9557q\") pod \"keystone-db-create-j9j8r\" (UID: \"90c5f747-a058-4b9d-bc56-54a80f7a980c\") " pod="openstack/keystone-db-create-j9j8r" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.823186 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") on node \"crc\" " Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.823220 4792 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2af964c3-1de4-48af-a89c-df58527be8cb-web-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.844465 4792 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.844627 4792 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d876c2f3-479b-491c-9733-a774bc11004d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d") on node "crc"
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.917222 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-r24r2"]
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.918684 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r24r2"
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.924703 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c5f747-a058-4b9d-bc56-54a80f7a980c-operator-scripts\") pod \"keystone-db-create-j9j8r\" (UID: \"90c5f747-a058-4b9d-bc56-54a80f7a980c\") " pod="openstack/keystone-db-create-j9j8r"
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.924768 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgrnl\" (UniqueName: \"kubernetes.io/projected/2bda37ed-63e8-4cdc-98c9-50025ead6629-kube-api-access-lgrnl\") pod \"keystone-3ac4-account-create-update-46n8p\" (UID: \"2bda37ed-63e8-4cdc-98c9-50025ead6629\") " pod="openstack/keystone-3ac4-account-create-update-46n8p"
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.924820 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9557q\" (UniqueName: \"kubernetes.io/projected/90c5f747-a058-4b9d-bc56-54a80f7a980c-kube-api-access-9557q\") pod \"keystone-db-create-j9j8r\" (UID: \"90c5f747-a058-4b9d-bc56-54a80f7a980c\") " pod="openstack/keystone-db-create-j9j8r"
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.924983 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bda37ed-63e8-4cdc-98c9-50025ead6629-operator-scripts\") pod \"keystone-3ac4-account-create-update-46n8p\" (UID: \"2bda37ed-63e8-4cdc-98c9-50025ead6629\") " pod="openstack/keystone-3ac4-account-create-update-46n8p"
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.925187 4792 reconciler_common.go:293] "Volume detached for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") on node \"crc\" DevicePath \"\""
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.925544 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c5f747-a058-4b9d-bc56-54a80f7a980c-operator-scripts\") pod \"keystone-db-create-j9j8r\" (UID: \"90c5f747-a058-4b9d-bc56-54a80f7a980c\") " pod="openstack/keystone-db-create-j9j8r"
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.926884 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-r24r2"]
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.944455 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9557q\" (UniqueName: \"kubernetes.io/projected/90c5f747-a058-4b9d-bc56-54a80f7a980c-kube-api-access-9557q\") pod \"keystone-db-create-j9j8r\" (UID: \"90c5f747-a058-4b9d-bc56-54a80f7a980c\") " pod="openstack/keystone-db-create-j9j8r"
Nov 27 17:30:20 crc kubenswrapper[4792]: I1127 17:30:20.996686 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8b76-account-create-update-wb5pq"]
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.000903 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8b76-account-create-update-wb5pq"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.003929 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.027996 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bda37ed-63e8-4cdc-98c9-50025ead6629-operator-scripts\") pod \"keystone-3ac4-account-create-update-46n8p\" (UID: \"2bda37ed-63e8-4cdc-98c9-50025ead6629\") " pod="openstack/keystone-3ac4-account-create-update-46n8p"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.028139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgrnl\" (UniqueName: \"kubernetes.io/projected/2bda37ed-63e8-4cdc-98c9-50025ead6629-kube-api-access-lgrnl\") pod \"keystone-3ac4-account-create-update-46n8p\" (UID: \"2bda37ed-63e8-4cdc-98c9-50025ead6629\") " pod="openstack/keystone-3ac4-account-create-update-46n8p"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.028251 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-operator-scripts\") pod \"placement-db-create-r24r2\" (UID: \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\") " pod="openstack/placement-db-create-r24r2"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.028298 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r76z\" (UniqueName: \"kubernetes.io/projected/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-kube-api-access-2r76z\") pod \"placement-db-create-r24r2\" (UID: \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\") " pod="openstack/placement-db-create-r24r2"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.029097 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bda37ed-63e8-4cdc-98c9-50025ead6629-operator-scripts\") pod \"keystone-3ac4-account-create-update-46n8p\" (UID: \"2bda37ed-63e8-4cdc-98c9-50025ead6629\") " pod="openstack/keystone-3ac4-account-create-update-46n8p"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.035715 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8b76-account-create-update-wb5pq"]
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.046001 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.054483 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgrnl\" (UniqueName: \"kubernetes.io/projected/2bda37ed-63e8-4cdc-98c9-50025ead6629-kube-api-access-lgrnl\") pod \"keystone-3ac4-account-create-update-46n8p\" (UID: \"2bda37ed-63e8-4cdc-98c9-50025ead6629\") " pod="openstack/keystone-3ac4-account-create-update-46n8p"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.054486 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.064549 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-j9j8r"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.084059 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ac4-account-create-update-46n8p"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.091398 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.093887 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.096522 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jcr5s"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.098182 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.098532 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.098707 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.100845 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.102551 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.108663 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.111201 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.130269 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-operator-scripts\") pod \"placement-db-create-r24r2\" (UID: \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\") " pod="openstack/placement-db-create-r24r2"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.130338 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r76z\" (UniqueName: \"kubernetes.io/projected/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-kube-api-access-2r76z\") pod \"placement-db-create-r24r2\" (UID: \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\") " pod="openstack/placement-db-create-r24r2"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.130385 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0033423-6c17-45fe-8356-e655c26af92b-operator-scripts\") pod \"placement-8b76-account-create-update-wb5pq\" (UID: \"f0033423-6c17-45fe-8356-e655c26af92b\") " pod="openstack/placement-8b76-account-create-update-wb5pq"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.130417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szpbb\" (UniqueName: \"kubernetes.io/projected/f0033423-6c17-45fe-8356-e655c26af92b-kube-api-access-szpbb\") pod \"placement-8b76-account-create-update-wb5pq\" (UID: \"f0033423-6c17-45fe-8356-e655c26af92b\") " pod="openstack/placement-8b76-account-create-update-wb5pq"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.131805 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-operator-scripts\") pod \"placement-db-create-r24r2\" (UID: \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\") " pod="openstack/placement-db-create-r24r2"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.147796 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r76z\" (UniqueName: \"kubernetes.io/projected/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-kube-api-access-2r76z\") pod \"placement-db-create-r24r2\" (UID: \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\") " pod="openstack/placement-db-create-r24r2"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231721 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ad7f090-9b35-4a85-86d5-1763f234a768-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231774 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ad7f090-9b35-4a85-86d5-1763f234a768-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231801 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231824 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231881 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt8cm\" (UniqueName: \"kubernetes.io/projected/4ad7f090-9b35-4a85-86d5-1763f234a768-kube-api-access-bt8cm\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231911 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0033423-6c17-45fe-8356-e655c26af92b-operator-scripts\") pod \"placement-8b76-account-create-update-wb5pq\" (UID: \"f0033423-6c17-45fe-8356-e655c26af92b\") " pod="openstack/placement-8b76-account-create-update-wb5pq"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231943 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szpbb\" (UniqueName: \"kubernetes.io/projected/f0033423-6c17-45fe-8356-e655c26af92b-kube-api-access-szpbb\") pod \"placement-8b76-account-create-update-wb5pq\" (UID: \"f0033423-6c17-45fe-8356-e655c26af92b\") " pod="openstack/placement-8b76-account-create-update-wb5pq"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231966 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.231989 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.232011 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.232036 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4ad7f090-9b35-4a85-86d5-1763f234a768-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.232059 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.232102 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.233242 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0033423-6c17-45fe-8356-e655c26af92b-operator-scripts\") pod \"placement-8b76-account-create-update-wb5pq\" (UID: \"f0033423-6c17-45fe-8356-e655c26af92b\") " pod="openstack/placement-8b76-account-create-update-wb5pq"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.273972 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szpbb\" (UniqueName: \"kubernetes.io/projected/f0033423-6c17-45fe-8356-e655c26af92b-kube-api-access-szpbb\") pod \"placement-8b76-account-create-update-wb5pq\" (UID: \"f0033423-6c17-45fe-8356-e655c26af92b\") " pod="openstack/placement-8b76-account-create-update-wb5pq"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.295681 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r24r2"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.325172 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8b76-account-create-update-wb5pq"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.333689 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4ad7f090-9b35-4a85-86d5-1763f234a768-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.334937 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.334866 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4ad7f090-9b35-4a85-86d5-1763f234a768-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335034 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335543 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ad7f090-9b35-4a85-86d5-1763f234a768-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335572 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ad7f090-9b35-4a85-86d5-1763f234a768-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335601 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335628 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335711 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt8cm\" (UniqueName: \"kubernetes.io/projected/4ad7f090-9b35-4a85-86d5-1763f234a768-kube-api-access-bt8cm\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335745 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.335817 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.350123 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4ad7f090-9b35-4a85-86d5-1763f234a768-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.350920 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.352719 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.353885 4792
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.356219 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.357340 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4ad7f090-9b35-4a85-86d5-1763f234a768-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.357849 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.393286 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4ad7f090-9b35-4a85-86d5-1763f234a768-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.408418 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt8cm\" (UniqueName: \"kubernetes.io/projected/4ad7f090-9b35-4a85-86d5-1763f234a768-kube-api-access-bt8cm\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.456757 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-csvr8"] Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.459784 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-csvr8" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.479638 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-csvr8"] Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.508405 4792 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.508479 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a4ae8a256b777150ec36228df485705416cca05b1e9dfc7938ea8781e1038e97/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.534700 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6878-account-create-update-w2x65"] Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.535967 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.541106 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299dt\" (UniqueName: \"kubernetes.io/projected/5603bece-14a8-4764-8a38-000aa3ea0199-kube-api-access-299dt\") pod \"glance-db-create-csvr8\" (UID: \"5603bece-14a8-4764-8a38-000aa3ea0199\") " pod="openstack/glance-db-create-csvr8" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.541175 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5603bece-14a8-4764-8a38-000aa3ea0199-operator-scripts\") pod \"glance-db-create-csvr8\" (UID: \"5603bece-14a8-4764-8a38-000aa3ea0199\") " pod="openstack/glance-db-create-csvr8" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.548023 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.597763 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6878-account-create-update-w2x65"] Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.642461 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6903227-0287-490a-b643-b9eecab193dd-operator-scripts\") pod \"glance-6878-account-create-update-w2x65\" (UID: \"c6903227-0287-490a-b643-b9eecab193dd\") " pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.642544 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88pvs\" (UniqueName: \"kubernetes.io/projected/c6903227-0287-490a-b643-b9eecab193dd-kube-api-access-88pvs\") pod \"glance-6878-account-create-update-w2x65\" (UID: \"c6903227-0287-490a-b643-b9eecab193dd\") " pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.642606 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-299dt\" (UniqueName: \"kubernetes.io/projected/5603bece-14a8-4764-8a38-000aa3ea0199-kube-api-access-299dt\") pod \"glance-db-create-csvr8\" (UID: \"5603bece-14a8-4764-8a38-000aa3ea0199\") " pod="openstack/glance-db-create-csvr8" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.642674 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/5603bece-14a8-4764-8a38-000aa3ea0199-operator-scripts\") pod \"glance-db-create-csvr8\" (UID: \"5603bece-14a8-4764-8a38-000aa3ea0199\") " pod="openstack/glance-db-create-csvr8" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.646781 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5603bece-14a8-4764-8a38-000aa3ea0199-operator-scripts\") pod \"glance-db-create-csvr8\" (UID: \"5603bece-14a8-4764-8a38-000aa3ea0199\") " pod="openstack/glance-db-create-csvr8" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.694276 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-299dt\" (UniqueName: \"kubernetes.io/projected/5603bece-14a8-4764-8a38-000aa3ea0199-kube-api-access-299dt\") pod \"glance-db-create-csvr8\" (UID: \"5603bece-14a8-4764-8a38-000aa3ea0199\") " pod="openstack/glance-db-create-csvr8" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.736829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"e09f6bb9c7d5c8371f15fabee2ac98c3b57ac49c1d516143bec294474e6cbd6f"} Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.744932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88pvs\" (UniqueName: \"kubernetes.io/projected/c6903227-0287-490a-b643-b9eecab193dd-kube-api-access-88pvs\") pod \"glance-6878-account-create-update-w2x65\" (UID: \"c6903227-0287-490a-b643-b9eecab193dd\") " pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.745284 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6903227-0287-490a-b643-b9eecab193dd-operator-scripts\") pod \"glance-6878-account-create-update-w2x65\" (UID: \"c6903227-0287-490a-b643-b9eecab193dd\") " pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.746018 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6903227-0287-490a-b643-b9eecab193dd-operator-scripts\") pod \"glance-6878-account-create-update-w2x65\" (UID: \"c6903227-0287-490a-b643-b9eecab193dd\") " pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.786723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88pvs\" (UniqueName: \"kubernetes.io/projected/c6903227-0287-490a-b643-b9eecab193dd-kube-api-access-88pvs\") pod \"glance-6878-account-create-update-w2x65\" (UID: \"c6903227-0287-490a-b643-b9eecab193dd\") " pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.808597 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d876c2f3-479b-491c-9733-a774bc11004d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d876c2f3-479b-491c-9733-a774bc11004d\") pod \"prometheus-metric-storage-0\" (UID: \"4ad7f090-9b35-4a85-86d5-1763f234a768\") " pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.865389 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-csvr8" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.896309 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:21 crc kubenswrapper[4792]: I1127 17:30:21.912001 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3ac4-account-create-update-46n8p"] Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.063213 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-j9j8r"] Nov 27 17:30:22 crc kubenswrapper[4792]: W1127 17:30:22.070840 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90c5f747_a058_4b9d_bc56_54a80f7a980c.slice/crio-f70e55cb23f160d73ec065d31ca67a8d36022f30b084485f428b358ec4d92046 WatchSource:0}: Error finding container f70e55cb23f160d73ec065d31ca67a8d36022f30b084485f428b358ec4d92046: Status 404 returned error can't find the container with id f70e55cb23f160d73ec065d31ca67a8d36022f30b084485f428b358ec4d92046 Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.103563 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.231631 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-r24r2"] Nov 27 17:30:22 crc kubenswrapper[4792]: W1127 17:30:22.340440 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c49a20e_fc3b_4a3a_8a88_a300599bc1d6.slice/crio-3b5cbcdf62a1d97c38c708327ac4eddb4c2fbb75ff6f1b008f4f8e85ab40d88a WatchSource:0}: Error finding container 3b5cbcdf62a1d97c38c708327ac4eddb4c2fbb75ff6f1b008f4f8e85ab40d88a: Status 404 returned error can't find the container with id 3b5cbcdf62a1d97c38c708327ac4eddb4c2fbb75ff6f1b008f4f8e85ab40d88a Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.377701 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:22 crc kubenswrapper[4792]: W1127 17:30:22.382043 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0033423_6c17_45fe_8356_e655c26af92b.slice/crio-8437590d79f540504bf02a8c0cd64e9364e214d1c26dd715872dcea1f7972239 WatchSource:0}: Error finding container 8437590d79f540504bf02a8c0cd64e9364e214d1c26dd715872dcea1f7972239: Status 404 returned error can't find the container with id 8437590d79f540504bf02a8c0cd64e9364e214d1c26dd715872dcea1f7972239 Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.405717 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8b76-account-create-update-wb5pq"] Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.408243 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.465798 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbjff\" (UniqueName: \"kubernetes.io/projected/9c410408-137e-4436-b401-2e9b2de55ac1-kube-api-access-tbjff\") pod \"9c410408-137e-4436-b401-2e9b2de55ac1\" (UID: \"9c410408-137e-4436-b401-2e9b2de55ac1\") " Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.465911 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c410408-137e-4436-b401-2e9b2de55ac1-operator-scripts\") pod \"9c410408-137e-4436-b401-2e9b2de55ac1\" (UID: \"9c410408-137e-4436-b401-2e9b2de55ac1\") " Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.470230 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c410408-137e-4436-b401-2e9b2de55ac1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c410408-137e-4436-b401-2e9b2de55ac1" (UID: "9c410408-137e-4436-b401-2e9b2de55ac1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.491143 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c410408-137e-4436-b401-2e9b2de55ac1-kube-api-access-tbjff" (OuterVolumeSpecName: "kube-api-access-tbjff") pod "9c410408-137e-4436-b401-2e9b2de55ac1" (UID: "9c410408-137e-4436-b401-2e9b2de55ac1"). InnerVolumeSpecName "kube-api-access-tbjff". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.536538 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6878-account-create-update-w2x65"] Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.547731 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-csvr8"] Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.568392 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/de1907ef-3b87-4899-a1f7-75096bb1b94e-kube-api-access-gxnqt\") pod \"de1907ef-3b87-4899-a1f7-75096bb1b94e\" (UID: \"de1907ef-3b87-4899-a1f7-75096bb1b94e\") " Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.568697 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de1907ef-3b87-4899-a1f7-75096bb1b94e-operator-scripts\") pod \"de1907ef-3b87-4899-a1f7-75096bb1b94e\" (UID: \"de1907ef-3b87-4899-a1f7-75096bb1b94e\") " Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.569269 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbjff\" (UniqueName: \"kubernetes.io/projected/9c410408-137e-4436-b401-2e9b2de55ac1-kube-api-access-tbjff\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.569300 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c410408-137e-4436-b401-2e9b2de55ac1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.570315 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/de1907ef-3b87-4899-a1f7-75096bb1b94e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de1907ef-3b87-4899-a1f7-75096bb1b94e" (UID: "de1907ef-3b87-4899-a1f7-75096bb1b94e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.585360 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1907ef-3b87-4899-a1f7-75096bb1b94e-kube-api-access-gxnqt" (OuterVolumeSpecName: "kube-api-access-gxnqt") pod "de1907ef-3b87-4899-a1f7-75096bb1b94e" (UID: "de1907ef-3b87-4899-a1f7-75096bb1b94e"). InnerVolumeSpecName "kube-api-access-gxnqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.671038 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de1907ef-3b87-4899-a1f7-75096bb1b94e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.671069 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxnqt\" (UniqueName: \"kubernetes.io/projected/de1907ef-3b87-4899-a1f7-75096bb1b94e-kube-api-access-gxnqt\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.712064 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af964c3-1de4-48af-a89c-df58527be8cb" path="/var/lib/kubelet/pods/2af964c3-1de4-48af-a89c-df58527be8cb/volumes" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.770316 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.770319 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b4b8-account-create-update-xdc22" event={"ID":"de1907ef-3b87-4899-a1f7-75096bb1b94e","Type":"ContainerDied","Data":"e68ad7618528694891da426be5075aa812a57689828fba9378195a7c30d44251"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.770717 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e68ad7618528694891da426be5075aa812a57689828fba9378195a7c30d44251" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.771949 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-csvr8" event={"ID":"5603bece-14a8-4764-8a38-000aa3ea0199","Type":"ContainerStarted","Data":"336c07bf58351b6a2009269c80e45d346fd3eb4c842e167c39126d2fd4e3a836"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.774863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r24r2" event={"ID":"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6","Type":"ContainerStarted","Data":"6023424e6aa4f78400771f16622a3e528b01d0e970d750fc8b8860502279190a"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.774904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r24r2" event={"ID":"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6","Type":"ContainerStarted","Data":"3b5cbcdf62a1d97c38c708327ac4eddb4c2fbb75ff6f1b008f4f8e85ab40d88a"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.779747 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8b76-account-create-update-wb5pq" 
event={"ID":"f0033423-6c17-45fe-8356-e655c26af92b","Type":"ContainerStarted","Data":"5c536dda591376999072eaf5f89bb031397384bcad5a932a7775be4aafc7d367"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.779787 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8b76-account-create-update-wb5pq" event={"ID":"f0033423-6c17-45fe-8356-e655c26af92b","Type":"ContainerStarted","Data":"8437590d79f540504bf02a8c0cd64e9364e214d1c26dd715872dcea1f7972239"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.785903 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" event={"ID":"9c410408-137e-4436-b401-2e9b2de55ac1","Type":"ContainerDied","Data":"5b9d78e530e1bae7e27dbe1b2f0a7c1146b47a35f448abb654b27324479083ac"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.785941 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b9d78e530e1bae7e27dbe1b2f0a7c1146b47a35f448abb654b27324479083ac" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.786008 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.793767 4792 generic.go:334] "Generic (PLEG): container finished" podID="2bda37ed-63e8-4cdc-98c9-50025ead6629" containerID="441ab28921b66a6e552cad9b6a59176d4e86140e97996262282f2955130d87a9" exitCode=0 Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.793819 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ac4-account-create-update-46n8p" event={"ID":"2bda37ed-63e8-4cdc-98c9-50025ead6629","Type":"ContainerDied","Data":"441ab28921b66a6e552cad9b6a59176d4e86140e97996262282f2955130d87a9"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.793864 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ac4-account-create-update-46n8p" event={"ID":"2bda37ed-63e8-4cdc-98c9-50025ead6629","Type":"ContainerStarted","Data":"0d66c9a440bd349d164402799657d98a874805668e7e0692918d37a7e4137742"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.795449 4792 generic.go:334] "Generic (PLEG): container finished" podID="90c5f747-a058-4b9d-bc56-54a80f7a980c" containerID="5aaad17c3dbf3bb931fa3c3ca7e72c3e096756eb0874d2cc6ce445e1c617a0a2" exitCode=0 Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.795500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-j9j8r" event={"ID":"90c5f747-a058-4b9d-bc56-54a80f7a980c","Type":"ContainerDied","Data":"5aaad17c3dbf3bb931fa3c3ca7e72c3e096756eb0874d2cc6ce445e1c617a0a2"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.795735 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-j9j8r" event={"ID":"90c5f747-a058-4b9d-bc56-54a80f7a980c","Type":"ContainerStarted","Data":"f70e55cb23f160d73ec065d31ca67a8d36022f30b084485f428b358ec4d92046"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.798106 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"e153adf78683392bb9599bcf3b13e39af257bce7da1b2bf8d0719d9bbe614d6c"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.798139 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"da1475eb6b18b820fccae3586a4dd8fbedf5f82eb43916fda98f58f11fb70a79"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.798152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"f92ccc991fe3d9f97d09c3f496e6b96ff5cf49dfc3ce34d1e82869adf90bd9e4"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.803534 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-r24r2" podStartSLOduration=2.803512078 podStartE2EDuration="2.803512078s" podCreationTimestamp="2025-11-27 17:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:22.792297909 +0000 UTC m=+1245.135124217" watchObservedRunningTime="2025-11-27 17:30:22.803512078 +0000 UTC m=+1245.146338406" Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.805366 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6878-account-create-update-w2x65" event={"ID":"c6903227-0287-490a-b643-b9eecab193dd","Type":"ContainerStarted","Data":"c2ceddcb69e1e0d19b021cbdadbd2b97d64478cc00489d7c9dde297b7e929c69"} Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.827207 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 27 17:30:22 crc kubenswrapper[4792]: I1127 17:30:22.839488 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8b76-account-create-update-wb5pq" podStartSLOduration=2.839464254 podStartE2EDuration="2.839464254s" podCreationTimestamp="2025-11-27 17:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:22.815317272 +0000 UTC m=+1245.158143590" watchObservedRunningTime="2025-11-27 17:30:22.839464254 +0000 UTC m=+1245.182290572" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.756185 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:30:23 crc kubenswrapper[4792]: E1127 17:30:23.758075 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c410408-137e-4436-b401-2e9b2de55ac1" containerName="mariadb-database-create" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.758376 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c410408-137e-4436-b401-2e9b2de55ac1" containerName="mariadb-database-create" Nov 27 17:30:23 crc kubenswrapper[4792]: E1127 17:30:23.758480 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1907ef-3b87-4899-a1f7-75096bb1b94e" containerName="mariadb-account-create-update" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.758587 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1907ef-3b87-4899-a1f7-75096bb1b94e" containerName="mariadb-account-create-update" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.758972 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c410408-137e-4436-b401-2e9b2de55ac1" containerName="mariadb-database-create" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.759060 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1907ef-3b87-4899-a1f7-75096bb1b94e" containerName="mariadb-account-create-update" Nov 27 17:30:23 crc kubenswrapper[4792]: 
I1127 17:30:23.760013 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.763780 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.769127 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.827958 4792 generic.go:334] "Generic (PLEG): container finished" podID="f0033423-6c17-45fe-8356-e655c26af92b" containerID="5c536dda591376999072eaf5f89bb031397384bcad5a932a7775be4aafc7d367" exitCode=0 Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.828064 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8b76-account-create-update-wb5pq" event={"ID":"f0033423-6c17-45fe-8356-e655c26af92b","Type":"ContainerDied","Data":"5c536dda591376999072eaf5f89bb031397384bcad5a932a7775be4aafc7d367"} Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.830469 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ad7f090-9b35-4a85-86d5-1763f234a768","Type":"ContainerStarted","Data":"29b026eb5ed76d246de20863fb518ad878d61ce11b77c0e4cf91643308b1bca3"} Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.846054 4792 generic.go:334] "Generic (PLEG): container finished" podID="c6903227-0287-490a-b643-b9eecab193dd" containerID="e8b47678d5b1e188349bc1cc86a92649caa5707ffbee33812c3aef91ea901908" exitCode=0 Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.846522 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6878-account-create-update-w2x65" event={"ID":"c6903227-0287-490a-b643-b9eecab193dd","Type":"ContainerDied","Data":"e8b47678d5b1e188349bc1cc86a92649caa5707ffbee33812c3aef91ea901908"} Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.850784 4792 generic.go:334] "Generic (PLEG): container finished" podID="5603bece-14a8-4764-8a38-000aa3ea0199" containerID="03aa41992e6a52bc497dbd00daf24b7207765571dc9f9a0d6535247263d72817" exitCode=0 Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.850851 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-csvr8" event={"ID":"5603bece-14a8-4764-8a38-000aa3ea0199","Type":"ContainerDied","Data":"03aa41992e6a52bc497dbd00daf24b7207765571dc9f9a0d6535247263d72817"} Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.852639 4792 generic.go:334] "Generic (PLEG): container finished" podID="8c49a20e-fc3b-4a3a-8a88-a300599bc1d6" containerID="6023424e6aa4f78400771f16622a3e528b01d0e970d750fc8b8860502279190a" exitCode=0 Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.853022 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r24r2" event={"ID":"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6","Type":"ContainerDied","Data":"6023424e6aa4f78400771f16622a3e528b01d0e970d750fc8b8860502279190a"} Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.898049 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkfcn\" (UniqueName: \"kubernetes.io/projected/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-kube-api-access-rkfcn\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.898119 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-config-data\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:23 crc kubenswrapper[4792]: I1127 17:30:23.898209 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.010102 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.010279 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkfcn\" (UniqueName: \"kubernetes.io/projected/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-kube-api-access-rkfcn\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.010330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-config-data\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.015946 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-config-data\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.032726 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkfcn\" (UniqueName: \"kubernetes.io/projected/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-kube-api-access-rkfcn\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.033491 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " pod="openstack/mysqld-exporter-0" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.083948 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.592827 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ac4-account-create-update-46n8p" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.663809 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-j9j8r" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.746778 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgrnl\" (UniqueName: \"kubernetes.io/projected/2bda37ed-63e8-4cdc-98c9-50025ead6629-kube-api-access-lgrnl\") pod \"2bda37ed-63e8-4cdc-98c9-50025ead6629\" (UID: \"2bda37ed-63e8-4cdc-98c9-50025ead6629\") " Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.746974 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bda37ed-63e8-4cdc-98c9-50025ead6629-operator-scripts\") pod \"2bda37ed-63e8-4cdc-98c9-50025ead6629\" (UID: \"2bda37ed-63e8-4cdc-98c9-50025ead6629\") " Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.748004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bda37ed-63e8-4cdc-98c9-50025ead6629-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bda37ed-63e8-4cdc-98c9-50025ead6629" (UID: "2bda37ed-63e8-4cdc-98c9-50025ead6629"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.755348 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bda37ed-63e8-4cdc-98c9-50025ead6629-kube-api-access-lgrnl" (OuterVolumeSpecName: "kube-api-access-lgrnl") pod "2bda37ed-63e8-4cdc-98c9-50025ead6629" (UID: "2bda37ed-63e8-4cdc-98c9-50025ead6629"). InnerVolumeSpecName "kube-api-access-lgrnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.848363 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9557q\" (UniqueName: \"kubernetes.io/projected/90c5f747-a058-4b9d-bc56-54a80f7a980c-kube-api-access-9557q\") pod \"90c5f747-a058-4b9d-bc56-54a80f7a980c\" (UID: \"90c5f747-a058-4b9d-bc56-54a80f7a980c\") " Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.848787 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c5f747-a058-4b9d-bc56-54a80f7a980c-operator-scripts\") pod \"90c5f747-a058-4b9d-bc56-54a80f7a980c\" (UID: \"90c5f747-a058-4b9d-bc56-54a80f7a980c\") " Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.849379 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgrnl\" (UniqueName: \"kubernetes.io/projected/2bda37ed-63e8-4cdc-98c9-50025ead6629-kube-api-access-lgrnl\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.849392 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bda37ed-63e8-4cdc-98c9-50025ead6629-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.849437 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90c5f747-a058-4b9d-bc56-54a80f7a980c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90c5f747-a058-4b9d-bc56-54a80f7a980c" (UID: "90c5f747-a058-4b9d-bc56-54a80f7a980c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.853134 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c5f747-a058-4b9d-bc56-54a80f7a980c-kube-api-access-9557q" (OuterVolumeSpecName: "kube-api-access-9557q") pod "90c5f747-a058-4b9d-bc56-54a80f7a980c" (UID: "90c5f747-a058-4b9d-bc56-54a80f7a980c"). InnerVolumeSpecName "kube-api-access-9557q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.863363 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"94ea29b16032003d05b888c5d0db03254b4c0ccc4dc397d5c722bfe5255a2706"} Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.864832 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ac4-account-create-update-46n8p" event={"ID":"2bda37ed-63e8-4cdc-98c9-50025ead6629","Type":"ContainerDied","Data":"0d66c9a440bd349d164402799657d98a874805668e7e0692918d37a7e4137742"} Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.864850 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d66c9a440bd349d164402799657d98a874805668e7e0692918d37a7e4137742" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.864890 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ac4-account-create-update-46n8p" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.881616 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-j9j8r" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.884000 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-j9j8r" event={"ID":"90c5f747-a058-4b9d-bc56-54a80f7a980c","Type":"ContainerDied","Data":"f70e55cb23f160d73ec065d31ca67a8d36022f30b084485f428b358ec4d92046"} Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.884043 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f70e55cb23f160d73ec065d31ca67a8d36022f30b084485f428b358ec4d92046" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.951630 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9557q\" (UniqueName: \"kubernetes.io/projected/90c5f747-a058-4b9d-bc56-54a80f7a980c-kube-api-access-9557q\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:24 crc kubenswrapper[4792]: I1127 17:30:24.951766 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90c5f747-a058-4b9d-bc56-54a80f7a980c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.024474 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:30:25 crc kubenswrapper[4792]: W1127 17:30:25.077610 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a89ac26_e7c6_4cac_b67e_7e7cdfd8c7c5.slice/crio-3140ebf73b816a6cba43394ba0f8afc9aaae3802018332f739510cd583a42aee WatchSource:0}: Error finding container 3140ebf73b816a6cba43394ba0f8afc9aaae3802018332f739510cd583a42aee: Status 404 returned error can't find the container with id 3140ebf73b816a6cba43394ba0f8afc9aaae3802018332f739510cd583a42aee Nov 27 17:30:25 crc 
kubenswrapper[4792]: I1127 17:30:25.363165 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.481632 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6903227-0287-490a-b643-b9eecab193dd-operator-scripts\") pod \"c6903227-0287-490a-b643-b9eecab193dd\" (UID: \"c6903227-0287-490a-b643-b9eecab193dd\") " Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.481871 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88pvs\" (UniqueName: \"kubernetes.io/projected/c6903227-0287-490a-b643-b9eecab193dd-kube-api-access-88pvs\") pod \"c6903227-0287-490a-b643-b9eecab193dd\" (UID: \"c6903227-0287-490a-b643-b9eecab193dd\") " Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.484171 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6903227-0287-490a-b643-b9eecab193dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6903227-0287-490a-b643-b9eecab193dd" (UID: "c6903227-0287-490a-b643-b9eecab193dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.584294 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6903227-0287-490a-b643-b9eecab193dd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.682788 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6903227-0287-490a-b643-b9eecab193dd-kube-api-access-88pvs" (OuterVolumeSpecName: "kube-api-access-88pvs") pod "c6903227-0287-490a-b643-b9eecab193dd" (UID: "c6903227-0287-490a-b643-b9eecab193dd"). InnerVolumeSpecName "kube-api-access-88pvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.685742 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88pvs\" (UniqueName: \"kubernetes.io/projected/c6903227-0287-490a-b643-b9eecab193dd-kube-api-access-88pvs\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.708125 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8b76-account-create-update-wb5pq" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.715080 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r24r2" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.728445 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-csvr8" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.786590 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szpbb\" (UniqueName: \"kubernetes.io/projected/f0033423-6c17-45fe-8356-e655c26af92b-kube-api-access-szpbb\") pod \"f0033423-6c17-45fe-8356-e655c26af92b\" (UID: \"f0033423-6c17-45fe-8356-e655c26af92b\") " Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.786705 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0033423-6c17-45fe-8356-e655c26af92b-operator-scripts\") pod \"f0033423-6c17-45fe-8356-e655c26af92b\" (UID: \"f0033423-6c17-45fe-8356-e655c26af92b\") " Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.787207 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0033423-6c17-45fe-8356-e655c26af92b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0033423-6c17-45fe-8356-e655c26af92b" (UID: "f0033423-6c17-45fe-8356-e655c26af92b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.876064 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0033423-6c17-45fe-8356-e655c26af92b-kube-api-access-szpbb" (OuterVolumeSpecName: "kube-api-access-szpbb") pod "f0033423-6c17-45fe-8356-e655c26af92b" (UID: "f0033423-6c17-45fe-8356-e655c26af92b"). InnerVolumeSpecName "kube-api-access-szpbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.888323 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r76z\" (UniqueName: \"kubernetes.io/projected/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-kube-api-access-2r76z\") pod \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\" (UID: \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\") " Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.888402 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-299dt\" (UniqueName: \"kubernetes.io/projected/5603bece-14a8-4764-8a38-000aa3ea0199-kube-api-access-299dt\") pod \"5603bece-14a8-4764-8a38-000aa3ea0199\" (UID: \"5603bece-14a8-4764-8a38-000aa3ea0199\") " Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.888538 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-operator-scripts\") pod \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\" (UID: \"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6\") " Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.888581 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5603bece-14a8-4764-8a38-000aa3ea0199-operator-scripts\") pod \"5603bece-14a8-4764-8a38-000aa3ea0199\" (UID: \"5603bece-14a8-4764-8a38-000aa3ea0199\") " Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.892003 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c49a20e-fc3b-4a3a-8a88-a300599bc1d6" (UID: "8c49a20e-fc3b-4a3a-8a88-a300599bc1d6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.892160 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5603bece-14a8-4764-8a38-000aa3ea0199-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5603bece-14a8-4764-8a38-000aa3ea0199" (UID: "5603bece-14a8-4764-8a38-000aa3ea0199"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.893152 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5603bece-14a8-4764-8a38-000aa3ea0199-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.893177 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szpbb\" (UniqueName: \"kubernetes.io/projected/f0033423-6c17-45fe-8356-e655c26af92b-kube-api-access-szpbb\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.893187 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0033423-6c17-45fe-8356-e655c26af92b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.893998 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5603bece-14a8-4764-8a38-000aa3ea0199-kube-api-access-299dt" (OuterVolumeSpecName: "kube-api-access-299dt") pod "5603bece-14a8-4764-8a38-000aa3ea0199" (UID: "5603bece-14a8-4764-8a38-000aa3ea0199"). InnerVolumeSpecName "kube-api-access-299dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.896566 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-kube-api-access-2r76z" (OuterVolumeSpecName: "kube-api-access-2r76z") pod "8c49a20e-fc3b-4a3a-8a88-a300599bc1d6" (UID: "8c49a20e-fc3b-4a3a-8a88-a300599bc1d6"). InnerVolumeSpecName "kube-api-access-2r76z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.896888 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"d18865531eebb981e393119927388f426b05b0b447fd8ce923568877b8f4b390"} Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.896938 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"7fa1e79440cb4fbef83f9ad9c899aec3eb5012eeaca1f36bb31ffe67c67126c6"} Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.896946 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"31e3795fc01543490c8a5bd555789b888867f627cae0af450788e3ce5a5e95c2"} Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.898765 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6878-account-create-update-w2x65" event={"ID":"c6903227-0287-490a-b643-b9eecab193dd","Type":"ContainerDied","Data":"c2ceddcb69e1e0d19b021cbdadbd2b97d64478cc00489d7c9dde297b7e929c69"} Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.898788 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ceddcb69e1e0d19b021cbdadbd2b97d64478cc00489d7c9dde297b7e929c69" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.898769 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6878-account-create-update-w2x65" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.901327 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-csvr8" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.901703 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-csvr8" event={"ID":"5603bece-14a8-4764-8a38-000aa3ea0199","Type":"ContainerDied","Data":"336c07bf58351b6a2009269c80e45d346fd3eb4c842e167c39126d2fd4e3a836"} Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.901731 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="336c07bf58351b6a2009269c80e45d346fd3eb4c842e167c39126d2fd4e3a836" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.903074 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r24r2" event={"ID":"8c49a20e-fc3b-4a3a-8a88-a300599bc1d6","Type":"ContainerDied","Data":"3b5cbcdf62a1d97c38c708327ac4eddb4c2fbb75ff6f1b008f4f8e85ab40d88a"} Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.903095 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5cbcdf62a1d97c38c708327ac4eddb4c2fbb75ff6f1b008f4f8e85ab40d88a" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.903100 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-r24r2" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.904455 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8b76-account-create-update-wb5pq" event={"ID":"f0033423-6c17-45fe-8356-e655c26af92b","Type":"ContainerDied","Data":"8437590d79f540504bf02a8c0cd64e9364e214d1c26dd715872dcea1f7972239"} Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.904482 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8437590d79f540504bf02a8c0cd64e9364e214d1c26dd715872dcea1f7972239" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.904521 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8b76-account-create-update-wb5pq" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.913815 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5","Type":"ContainerStarted","Data":"3140ebf73b816a6cba43394ba0f8afc9aaae3802018332f739510cd583a42aee"} Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.995964 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r76z\" (UniqueName: \"kubernetes.io/projected/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-kube-api-access-2r76z\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.996064 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-299dt\" (UniqueName: \"kubernetes.io/projected/5603bece-14a8-4764-8a38-000aa3ea0199-kube-api-access-299dt\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:25 crc kubenswrapper[4792]: I1127 17:30:25.996084 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:26 crc kubenswrapper[4792]: I1127 17:30:26.925982 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5","Type":"ContainerStarted","Data":"9691adb05de04852dd6a5022c34491f5af95bd0040d089920e35ed5b2e6e0b31"} Nov 27 17:30:26 crc kubenswrapper[4792]: I1127 17:30:26.929195 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ad7f090-9b35-4a85-86d5-1763f234a768","Type":"ContainerStarted","Data":"0900f4eba72b03f63f2e3634b63d7fbce8d23dbade0b3752f2dc44458812f4e1"} Nov 27 17:30:26 crc kubenswrapper[4792]: I1127 17:30:26.943616 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.379420944 podStartE2EDuration="3.943592799s" podCreationTimestamp="2025-11-27 17:30:23 +0000 UTC" firstStartedPulling="2025-11-27 17:30:25.079253938 +0000 UTC m=+1247.422080256" lastFinishedPulling="2025-11-27 17:30:26.643425773 +0000 UTC m=+1248.986252111" observedRunningTime="2025-11-27 17:30:26.938919883 +0000 UTC m=+1249.281746221" watchObservedRunningTime="2025-11-27 17:30:26.943592799 +0000 UTC m=+1249.286419117" Nov 27 17:30:28 crc kubenswrapper[4792]: I1127 17:30:28.965464 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"2d23fdfbc8d6d412f7b73410e8f0185b7d71dcc85ecbffe55233da261ed9ba1d"} Nov 27 17:30:28 crc kubenswrapper[4792]: I1127 17:30:28.966109 
4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"3d3785e706de0e8e08b0ff1bb951295acb89e49dc8b1c986b2f5d886d9cdffe7"} Nov 27 17:30:28 crc kubenswrapper[4792]: I1127 17:30:28.966121 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"ffa18b1dfd19c95bb9d8c8291ea0f61fd9ff138cfb64157a1ba9a300c1e53113"} Nov 27 17:30:28 crc kubenswrapper[4792]: I1127 17:30:28.966150 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"37ef3aab08fb7ff5fe1a1b83635ce8f7d4e81779a741422cf37d93beefb3c4c8"} Nov 27 17:30:28 crc kubenswrapper[4792]: I1127 17:30:28.966161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"389ee0d666d759577b02df1cf7f7ab44fe2e4b5f7b6119fd4772c3c00fc0f42b"} Nov 27 17:30:29 crc kubenswrapper[4792]: I1127 17:30:29.986961 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"2bcd38dbe674d7e57f1ac850aabe6705671083c780c63961c67868610cc3e379"} Nov 27 17:30:29 crc kubenswrapper[4792]: I1127 17:30:29.987327 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b7d63be6-0f2b-4b86-abec-4576d23792a9","Type":"ContainerStarted","Data":"8723ddceebd45217f97b3abbf9d564d76de802e6f70ad183786432f4c6f2ef2d"} Nov 27 17:30:29 crc kubenswrapper[4792]: I1127 17:30:29.989104 4792 generic.go:334] "Generic (PLEG): container finished" podID="27d6022e-eea3-41e9-b880-620328dc5d78" containerID="8fffed7f25cc826dced9350fe59c2b2b5794322a9e9024a84355b647529d07bd" exitCode=0 Nov 27 17:30:29 crc kubenswrapper[4792]: I1127 17:30:29.989211 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27d6022e-eea3-41e9-b880-620328dc5d78","Type":"ContainerDied","Data":"8fffed7f25cc826dced9350fe59c2b2b5794322a9e9024a84355b647529d07bd"} Nov 27 17:30:29 crc kubenswrapper[4792]: I1127 17:30:29.992275 4792 generic.go:334] "Generic (PLEG): container finished" podID="dbbf8d9a-2069-4544-92db-ad5174339775" containerID="7b0b51a3568b0257fd59b44b84fcb2226603c94271cc99981af94095c140c28e" exitCode=0 Nov 27 17:30:29 crc kubenswrapper[4792]: I1127 17:30:29.992342 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbbf8d9a-2069-4544-92db-ad5174339775","Type":"ContainerDied","Data":"7b0b51a3568b0257fd59b44b84fcb2226603c94271cc99981af94095c140c28e"} Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.092257 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.737775389 podStartE2EDuration="47.092238745s" podCreationTimestamp="2025-11-27 17:29:43 +0000 UTC" firstStartedPulling="2025-11-27 17:30:17.318262867 +0000 UTC m=+1239.661089185" lastFinishedPulling="2025-11-27 17:30:27.672726223 +0000 UTC m=+1250.015552541" observedRunningTime="2025-11-27 17:30:30.05352886 +0000 UTC m=+1252.396355208" watchObservedRunningTime="2025-11-27 17:30:30.092238745 +0000 UTC m=+1252.435065063" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.355607 4792 
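The two "Observed pod startup duration" entries above encode a simple relationship worth knowing when reading these logs: podStartSLOduration appears to be podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. the SLO figure excludes time spent pulling images. A minimal numeric check against the swift-storage-0 entry; the constants are copied from the log (the "m=+…" monotonic offsets), and the subtraction is an inference from the numbers, not a quote of the tracker's source:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets ("m=+…" suffixes) and the E2E duration copied
        // from the swift-storage-0 startup-latency entry above.
        const (
            firstStartedPulling = 1239.661089185 // seconds
            lastFinishedPulling = 1250.015552541 // seconds
            podStartE2E         = 47.092238745   // podStartE2EDuration
        )
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image pull window: %.9fs\n", pull) // 10.354463356s
        // Matches the logged podStartSLOduration=36.737775389
        // (up to floating-point rounding).
        fmt.Printf("E2E minus pull:    %.9fs\n", podStartE2E-pull)
    }

The mysqld-exporter-0 entry checks out the same way: a pull window of 1248.986252111 - 1247.422080256 = 1.564171855s, and 3.943592799 - 1.564171855 = 2.379420944, the logged SLO duration.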
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-n2w4b"] Nov 27 17:30:30 crc kubenswrapper[4792]: E1127 17:30:30.356430 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c5f747-a058-4b9d-bc56-54a80f7a980c" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.356497 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c5f747-a058-4b9d-bc56-54a80f7a980c" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: E1127 17:30:30.356561 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6903227-0287-490a-b643-b9eecab193dd" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.356617 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6903227-0287-490a-b643-b9eecab193dd" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: E1127 17:30:30.356731 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0033423-6c17-45fe-8356-e655c26af92b" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.356786 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0033423-6c17-45fe-8356-e655c26af92b" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: E1127 17:30:30.356837 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5603bece-14a8-4764-8a38-000aa3ea0199" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.356890 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5603bece-14a8-4764-8a38-000aa3ea0199" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: E1127 17:30:30.356956 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda37ed-63e8-4cdc-98c9-50025ead6629" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.357015 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda37ed-63e8-4cdc-98c9-50025ead6629" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: E1127 17:30:30.357099 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c49a20e-fc3b-4a3a-8a88-a300599bc1d6" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.357153 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c49a20e-fc3b-4a3a-8a88-a300599bc1d6" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.357433 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0033423-6c17-45fe-8356-e655c26af92b" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.357511 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c49a20e-fc3b-4a3a-8a88-a300599bc1d6" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.357565 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6903227-0287-490a-b643-b9eecab193dd" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.357629 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5603bece-14a8-4764-8a38-000aa3ea0199" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.358726 4792 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2bda37ed-63e8-4cdc-98c9-50025ead6629" containerName="mariadb-account-create-update" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.358796 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c5f747-a058-4b9d-bc56-54a80f7a980c" containerName="mariadb-database-create" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.359980 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.363328 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.388029 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-n2w4b"] Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.426148 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.426210 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.426267 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.426381 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-config\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.426404 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.426424 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trtf6\" (UniqueName: \"kubernetes.io/projected/7c639620-d012-4fb1-851f-2316fb8c51bc-kube-api-access-trtf6\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.528717 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-config\") pod 
\"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.528771 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.528798 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trtf6\" (UniqueName: \"kubernetes.io/projected/7c639620-d012-4fb1-851f-2316fb8c51bc-kube-api-access-trtf6\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.528863 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.528909 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.528941 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.530027 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.530779 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-config\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.531453 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.532518 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") 
" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.533259 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.548727 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trtf6\" (UniqueName: \"kubernetes.io/projected/7c639620-d012-4fb1-851f-2316fb8c51bc-kube-api-access-trtf6\") pod \"dnsmasq-dns-6d5b6d6b67-n2w4b\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:30 crc kubenswrapper[4792]: I1127 17:30:30.682993 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.024019 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbbf8d9a-2069-4544-92db-ad5174339775","Type":"ContainerStarted","Data":"eb19d94a0bd842d00dcd04f1e391501497c7d4035c53f569b620e977505a3609"} Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.024544 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.029595 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27d6022e-eea3-41e9-b880-620328dc5d78","Type":"ContainerStarted","Data":"8659e37a39916a91f8b179786828202562ff0282105e9f9d5ccdfcbc85122bf8"} Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.029797 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.063396 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.063317233 podStartE2EDuration="1m26.063371896s" podCreationTimestamp="2025-11-27 17:29:05 +0000 UTC" firstStartedPulling="2025-11-27 17:29:07.950467669 +0000 UTC m=+1170.293293987" lastFinishedPulling="2025-11-27 17:29:55.950522332 +0000 UTC m=+1218.293348650" observedRunningTime="2025-11-27 17:30:31.056067924 +0000 UTC m=+1253.398894242" watchObservedRunningTime="2025-11-27 17:30:31.063371896 +0000 UTC m=+1253.406198214" Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.149275 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.421008753 podStartE2EDuration="1m25.149250645s" podCreationTimestamp="2025-11-27 17:29:06 +0000 UTC" firstStartedPulling="2025-11-27 17:29:08.24719639 +0000 UTC m=+1170.590022708" lastFinishedPulling="2025-11-27 17:29:55.975438282 +0000 UTC m=+1218.318264600" observedRunningTime="2025-11-27 17:30:31.098211904 +0000 UTC m=+1253.441038222" watchObservedRunningTime="2025-11-27 17:30:31.149250645 +0000 UTC m=+1253.492076963" Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.218872 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-n2w4b"] Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.641178 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5m4nt"] Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.644372 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.644786 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4s87f"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.666740 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5m4nt"]
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.759826 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-combined-ca-bundle\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.759886 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-db-sync-config-data\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.760315 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zp4n\" (UniqueName: \"kubernetes.io/projected/b04ada6c-7744-4237-8361-9c5cccad61b3-kube-api-access-2zp4n\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.760449 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-config-data\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.862177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zp4n\" (UniqueName: \"kubernetes.io/projected/b04ada6c-7744-4237-8361-9c5cccad61b3-kube-api-access-2zp4n\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.862283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-config-data\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.862305 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-combined-ca-bundle\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.862887 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-db-sync-config-data\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.866781 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-config-data\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.866789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-db-sync-config-data\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.866779 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-combined-ca-bundle\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.884983 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zp4n\" (UniqueName: \"kubernetes.io/projected/b04ada6c-7744-4237-8361-9c5cccad61b3-kube-api-access-2zp4n\") pod \"glance-db-sync-5m4nt\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:31 crc kubenswrapper[4792]: I1127 17:30:31.958077 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5m4nt"
Nov 27 17:30:32 crc kubenswrapper[4792]: I1127 17:30:32.048115 4792 generic.go:334] "Generic (PLEG): container finished" podID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerID="42d79b9f9cfa1094aa884431cb1249fd69009b9c5a2a21d5fada65edb159d8de" exitCode=0
Nov 27 17:30:32 crc kubenswrapper[4792]: I1127 17:30:32.048222 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" event={"ID":"7c639620-d012-4fb1-851f-2316fb8c51bc","Type":"ContainerDied","Data":"42d79b9f9cfa1094aa884431cb1249fd69009b9c5a2a21d5fada65edb159d8de"}
Nov 27 17:30:32 crc kubenswrapper[4792]: I1127 17:30:32.048503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" event={"ID":"7c639620-d012-4fb1-851f-2316fb8c51bc","Type":"ContainerStarted","Data":"64353dea212382a7770c00de5aa3c546b72c59768a36411a9ad099cc25979b4e"}
Nov 27 17:30:32 crc kubenswrapper[4792]: I1127 17:30:32.679619 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5m4nt"]
Nov 27 17:30:33 crc kubenswrapper[4792]: I1127 17:30:33.059232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" event={"ID":"7c639620-d012-4fb1-851f-2316fb8c51bc","Type":"ContainerStarted","Data":"1215a5008e45dadb5a11e1ff0ebfd6e23e27ad1a88407570262534ffe95371fd"}
Nov 27 17:30:33 crc kubenswrapper[4792]: I1127 17:30:33.059492 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b"
Nov 27 17:30:33 crc kubenswrapper[4792]: I1127 17:30:33.061105 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5m4nt" event={"ID":"b04ada6c-7744-4237-8361-9c5cccad61b3","Type":"ContainerStarted","Data":"b5eab511a2a510293f4d2dabcb054b042330950cd1d525ea753e37d30b09a75f"}
Nov 27 17:30:33 crc kubenswrapper[4792]: I1127 17:30:33.062918 4792 generic.go:334] "Generic (PLEG): container finished" podID="4ad7f090-9b35-4a85-86d5-1763f234a768" containerID="0900f4eba72b03f63f2e3634b63d7fbce8d23dbade0b3752f2dc44458812f4e1" exitCode=0
Nov 27 17:30:33 crc kubenswrapper[4792]: I1127 17:30:33.062960 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ad7f090-9b35-4a85-86d5-1763f234a768","Type":"ContainerDied","Data":"0900f4eba72b03f63f2e3634b63d7fbce8d23dbade0b3752f2dc44458812f4e1"}
Nov 27 17:30:33 crc kubenswrapper[4792]: I1127 17:30:33.089356 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" podStartSLOduration=3.089333894 podStartE2EDuration="3.089333894s" podCreationTimestamp="2025-11-27 17:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:33.085778196 +0000 UTC m=+1255.428604534" watchObservedRunningTime="2025-11-27 17:30:33.089333894 +0000 UTC m=+1255.432160222"
Nov 27 17:30:34 crc kubenswrapper[4792]: I1127 17:30:34.081873 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ad7f090-9b35-4a85-86d5-1763f234a768","Type":"ContainerStarted","Data":"1e7c5be1387f9a6959c182f1d3b13023ef79d746d898f92e1ee1a7c8e6e7c282"}
Nov 27 17:30:37 crc kubenswrapper[4792]: I1127 17:30:37.133701 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ad7f090-9b35-4a85-86d5-1763f234a768","Type":"ContainerStarted","Data":"073ad0e1b1e7920403bb8654bd28bd8645d710d23fbaf287e0024e7ca4a9f335"}
Nov 27 17:30:37 crc kubenswrapper[4792]: I1127 17:30:37.134246 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4ad7f090-9b35-4a85-86d5-1763f234a768","Type":"ContainerStarted","Data":"c8422d66f86deb5b0e47f9cb91cfa60cce07582847cfc6702067aa6a7d18705e"}
Nov 27 17:30:37 crc kubenswrapper[4792]: I1127 17:30:37.193074 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.193052641 podStartE2EDuration="16.193052641s" podCreationTimestamp="2025-11-27 17:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:37.186883117 +0000 UTC m=+1259.529709435" watchObservedRunningTime="2025-11-27 17:30:37.193052641 +0000 UTC m=+1259.535878959"
Nov 27 17:30:38 crc kubenswrapper[4792]: I1127 17:30:38.289937 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:30:38 crc kubenswrapper[4792]: I1127 17:30:38.290280 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 17:30:40 crc kubenswrapper[4792]: I1127 17:30:40.685577 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b"
Nov 27 17:30:40 crc kubenswrapper[4792]: I1127 17:30:40.783754 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b64n6"]
Nov 27 17:30:40 crc kubenswrapper[4792]: I1127 17:30:40.784243 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" podUID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerName="dnsmasq-dns" containerID="cri-o://cda4f09799cb5135e660556608faaa5e9549a1c1cf2c3901c1ad74f191b38a9e" gracePeriod=10
Nov 27 17:30:40 crc kubenswrapper[4792]: I1127 17:30:40.807863 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" podUID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused"
Nov 27 17:30:41 crc kubenswrapper[4792]: I1127 17:30:41.184297 4792 generic.go:334] "Generic (PLEG): container finished" podID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerID="cda4f09799cb5135e660556608faaa5e9549a1c1cf2c3901c1ad74f191b38a9e" exitCode=0
Nov 27 17:30:41 crc kubenswrapper[4792]: I1127 17:30:41.184378 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" event={"ID":"87ce06aa-1c03-4047-b4c8-36a610c07218","Type":"ContainerDied","Data":"cda4f09799cb5135e660556608faaa5e9549a1c1cf2c3901c1ad74f191b38a9e"}
Nov 27 17:30:42 crc kubenswrapper[4792]: I1127 17:30:42.111540 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.506258 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6"
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.681601 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-sb\") pod \"87ce06aa-1c03-4047-b4c8-36a610c07218\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") "
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.681778 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-nb\") pod \"87ce06aa-1c03-4047-b4c8-36a610c07218\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") "
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.681834 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-dns-svc\") pod \"87ce06aa-1c03-4047-b4c8-36a610c07218\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") "
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.681919 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvgz4\" (UniqueName: \"kubernetes.io/projected/87ce06aa-1c03-4047-b4c8-36a610c07218-kube-api-access-xvgz4\") pod \"87ce06aa-1c03-4047-b4c8-36a610c07218\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") "
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.682356 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-config\") pod \"87ce06aa-1c03-4047-b4c8-36a610c07218\" (UID: \"87ce06aa-1c03-4047-b4c8-36a610c07218\") "
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.687941 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ce06aa-1c03-4047-b4c8-36a610c07218-kube-api-access-xvgz4" (OuterVolumeSpecName: "kube-api-access-xvgz4") pod "87ce06aa-1c03-4047-b4c8-36a610c07218" (UID: "87ce06aa-1c03-4047-b4c8-36a610c07218"). InnerVolumeSpecName "kube-api-access-xvgz4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.745573 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-config" (OuterVolumeSpecName: "config") pod "87ce06aa-1c03-4047-b4c8-36a610c07218" (UID: "87ce06aa-1c03-4047-b4c8-36a610c07218"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.746762 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87ce06aa-1c03-4047-b4c8-36a610c07218" (UID: "87ce06aa-1c03-4047-b4c8-36a610c07218"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.749870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87ce06aa-1c03-4047-b4c8-36a610c07218" (UID: "87ce06aa-1c03-4047-b4c8-36a610c07218"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.750160 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87ce06aa-1c03-4047-b4c8-36a610c07218" (UID: "87ce06aa-1c03-4047-b4c8-36a610c07218"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.785955 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.786238 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.786367 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.786488 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvgz4\" (UniqueName: \"kubernetes.io/projected/87ce06aa-1c03-4047-b4c8-36a610c07218-kube-api-access-xvgz4\") on node \"crc\" DevicePath \"\""
Nov 27 17:30:45 crc kubenswrapper[4792]: I1127 17:30:45.786602 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87ce06aa-1c03-4047-b4c8-36a610c07218-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.245497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5m4nt" event={"ID":"b04ada6c-7744-4237-8361-9c5cccad61b3","Type":"ContainerStarted","Data":"f71a0ea8d82fcad94cf63e49123b6ac0d609379affa87069e10f839f98a04b48"}
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.252778 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6" event={"ID":"87ce06aa-1c03-4047-b4c8-36a610c07218","Type":"ContainerDied","Data":"d719f1eb2d101c499de5cc47298aa67f81e4d09b2cdde9612607898a2eaae2aa"}
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.252849 4792 scope.go:117] "RemoveContainer" containerID="cda4f09799cb5135e660556608faaa5e9549a1c1cf2c3901c1ad74f191b38a9e"
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.253092 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-b64n6"
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.274341 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5m4nt" podStartSLOduration=2.766363069 podStartE2EDuration="15.274326611s" podCreationTimestamp="2025-11-27 17:30:31 +0000 UTC" firstStartedPulling="2025-11-27 17:30:32.699539114 +0000 UTC m=+1255.042365462" lastFinishedPulling="2025-11-27 17:30:45.207502686 +0000 UTC m=+1267.550329004" observedRunningTime="2025-11-27 17:30:46.27227032 +0000 UTC m=+1268.615096638" watchObservedRunningTime="2025-11-27 17:30:46.274326611 +0000 UTC m=+1268.617152929"
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.290913 4792 scope.go:117] "RemoveContainer" containerID="89c516bdd0966bd3ddedaa93adfaa68c3bfef62cea2038525aad1adf00e90a9b"
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.314340 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b64n6"]
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.323814 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-b64n6"]
Nov 27 17:30:46 crc kubenswrapper[4792]: I1127 17:30:46.698711 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ce06aa-1c03-4047-b4c8-36a610c07218" path="/var/lib/kubelet/pods/87ce06aa-1c03-4047-b4c8-36a610c07218/volumes"
Nov 27 17:30:47 crc kubenswrapper[4792]: I1127 17:30:47.286883 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 27 17:30:47 crc kubenswrapper[4792]: I1127 17:30:47.624826 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.084072 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-hztnj"]
Nov 27 17:30:49 crc kubenswrapper[4792]: E1127 17:30:49.085096 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerName="dnsmasq-dns"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.085117 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerName="dnsmasq-dns"
Nov 27 17:30:49 crc kubenswrapper[4792]: E1127 17:30:49.085155 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerName="init"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.085163 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerName="init"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.085410 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ce06aa-1c03-4047-b4c8-36a610c07218" containerName="dnsmasq-dns"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.086397 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hztnj"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.117813 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-hztnj"]
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.280238 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-operator-scripts\") pod \"heat-db-create-hztnj\" (UID: \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\") " pod="openstack/heat-db-create-hztnj"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.280581 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmk4h\" (UniqueName: \"kubernetes.io/projected/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-kube-api-access-hmk4h\") pod \"heat-db-create-hztnj\" (UID: \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\") " pod="openstack/heat-db-create-hztnj"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.321356 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e32d-account-create-update-cnfgl"]
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.324575 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e32d-account-create-update-cnfgl"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.328359 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.340739 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e32d-account-create-update-cnfgl"]
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.381895 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-operator-scripts\") pod \"heat-db-create-hztnj\" (UID: \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\") " pod="openstack/heat-db-create-hztnj"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.382003 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmk4h\" (UniqueName: \"kubernetes.io/projected/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-kube-api-access-hmk4h\") pod \"heat-db-create-hztnj\" (UID: \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\") " pod="openstack/heat-db-create-hztnj"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.383001 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-operator-scripts\") pod \"heat-db-create-hztnj\" (UID: \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\") " pod="openstack/heat-db-create-hztnj"
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.415258 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8wxff"]
Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.416657 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8wxff"
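A note on the timestamps throughout these entries: the "m=+…" suffix (for example m=+1268.617152929 in the glance-db-sync-5m4nt latency entry above) is Go's monotonic clock reading. Per the Go time package's documented behavior, time.Now() captures both a wall-clock and a monotonic reading, the default formatting appends the monotonic offset as "m=±…", and subtracting two such times uses the monotonic parts, so the durations the kubelet logs are immune to wall-clock steps. A small self-contained demonstration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t0 := time.Now()
        time.Sleep(50 * time.Millisecond)
        t1 := time.Now()
        // Default formatting appends the monotonic reading, the same
        // "m=+…" suffix seen in the kubelet timestamps above.
        fmt.Println(t1)
        // Sub uses the monotonic clock when both operands carry one, so
        // the result is unaffected by NTP steps or manual clock changes.
        fmt.Println(t1.Sub(t0))
    }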
Need to start a new one" pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.438078 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8wxff"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.442621 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmk4h\" (UniqueName: \"kubernetes.io/projected/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-kube-api-access-hmk4h\") pod \"heat-db-create-hztnj\" (UID: \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\") " pod="openstack/heat-db-create-hztnj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.477093 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-c035-account-create-update-tfx4j"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.478510 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.480822 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.483362 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc1c911-18ed-47e3-9028-e3accc0567fe-operator-scripts\") pod \"barbican-e32d-account-create-update-cnfgl\" (UID: \"6fc1c911-18ed-47e3-9028-e3accc0567fe\") " pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.483432 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfddd\" (UniqueName: \"kubernetes.io/projected/6fc1c911-18ed-47e3-9028-e3accc0567fe-kube-api-access-hfddd\") pod \"barbican-e32d-account-create-update-cnfgl\" (UID: \"6fc1c911-18ed-47e3-9028-e3accc0567fe\") " pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.483465 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rgj\" (UniqueName: \"kubernetes.io/projected/baa1189f-a300-44d7-9a8b-008bc13f13f7-kube-api-access-p8rgj\") pod \"cinder-db-create-8wxff\" (UID: \"baa1189f-a300-44d7-9a8b-008bc13f13f7\") " pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.483728 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa1189f-a300-44d7-9a8b-008bc13f13f7-operator-scripts\") pod \"cinder-db-create-8wxff\" (UID: \"baa1189f-a300-44d7-9a8b-008bc13f13f7\") " pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.499091 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c035-account-create-update-tfx4j"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.536773 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pdqxz"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.538145 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.543266 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.543565 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.543695 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p6xs8" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.543803 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.545750 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-545fj"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.547159 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-545fj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.569372 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pdqxz"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.582799 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-545fj"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.585374 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa1189f-a300-44d7-9a8b-008bc13f13f7-operator-scripts\") pod \"cinder-db-create-8wxff\" (UID: \"baa1189f-a300-44d7-9a8b-008bc13f13f7\") " pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.585447 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc1c911-18ed-47e3-9028-e3accc0567fe-operator-scripts\") pod \"barbican-e32d-account-create-update-cnfgl\" (UID: \"6fc1c911-18ed-47e3-9028-e3accc0567fe\") " pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.585487 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65732d4f-b7b6-44fd-a200-22f9a70de277-operator-scripts\") pod \"heat-c035-account-create-update-tfx4j\" (UID: \"65732d4f-b7b6-44fd-a200-22f9a70de277\") " pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.585530 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfddd\" (UniqueName: \"kubernetes.io/projected/6fc1c911-18ed-47e3-9028-e3accc0567fe-kube-api-access-hfddd\") pod \"barbican-e32d-account-create-update-cnfgl\" (UID: \"6fc1c911-18ed-47e3-9028-e3accc0567fe\") " pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.585548 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mq89\" (UniqueName: \"kubernetes.io/projected/65732d4f-b7b6-44fd-a200-22f9a70de277-kube-api-access-7mq89\") pod \"heat-c035-account-create-update-tfx4j\" (UID: \"65732d4f-b7b6-44fd-a200-22f9a70de277\") " pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.585576 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rgj\" (UniqueName: \"kubernetes.io/projected/baa1189f-a300-44d7-9a8b-008bc13f13f7-kube-api-access-p8rgj\") pod \"cinder-db-create-8wxff\" (UID: \"baa1189f-a300-44d7-9a8b-008bc13f13f7\") " pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.586461 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa1189f-a300-44d7-9a8b-008bc13f13f7-operator-scripts\") pod \"cinder-db-create-8wxff\" (UID: \"baa1189f-a300-44d7-9a8b-008bc13f13f7\") " pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.586626 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc1c911-18ed-47e3-9028-e3accc0567fe-operator-scripts\") pod \"barbican-e32d-account-create-update-cnfgl\" (UID: \"6fc1c911-18ed-47e3-9028-e3accc0567fe\") " pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.594304 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0b23-account-create-update-q8c7m"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.595695 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.598743 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.603494 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rgj\" (UniqueName: \"kubernetes.io/projected/baa1189f-a300-44d7-9a8b-008bc13f13f7-kube-api-access-p8rgj\") pod \"cinder-db-create-8wxff\" (UID: \"baa1189f-a300-44d7-9a8b-008bc13f13f7\") " pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.613776 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfddd\" (UniqueName: \"kubernetes.io/projected/6fc1c911-18ed-47e3-9028-e3accc0567fe-kube-api-access-hfddd\") pod \"barbican-e32d-account-create-update-cnfgl\" (UID: \"6fc1c911-18ed-47e3-9028-e3accc0567fe\") " pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.613838 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0b23-account-create-update-q8c7m"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.664269 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.676963 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-cg4t6"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.678589 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.688295 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e836-account-create-update-tshc7"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.689689 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.693703 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp6s9\" (UniqueName: \"kubernetes.io/projected/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-kube-api-access-bp6s9\") pod \"barbican-db-create-545fj\" (UID: \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\") " pod="openstack/barbican-db-create-545fj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.693747 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n2hz\" (UniqueName: \"kubernetes.io/projected/fb873a37-184a-4086-b6c6-3164fa76cbce-kube-api-access-8n2hz\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.693783 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-config-data\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.693824 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced818ea-81fd-421d-b134-308b13cc2076-operator-scripts\") pod \"cinder-0b23-account-create-update-q8c7m\" (UID: \"ced818ea-81fd-421d-b134-308b13cc2076\") " pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.693873 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-operator-scripts\") pod \"barbican-db-create-545fj\" (UID: \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\") " pod="openstack/barbican-db-create-545fj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.693996 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65732d4f-b7b6-44fd-a200-22f9a70de277-operator-scripts\") pod \"heat-c035-account-create-update-tfx4j\" (UID: \"65732d4f-b7b6-44fd-a200-22f9a70de277\") " pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.694035 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-combined-ca-bundle\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.694086 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mq89\" (UniqueName: \"kubernetes.io/projected/65732d4f-b7b6-44fd-a200-22f9a70de277-kube-api-access-7mq89\") pod \"heat-c035-account-create-update-tfx4j\" (UID: \"65732d4f-b7b6-44fd-a200-22f9a70de277\") " pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.694114 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4qzw\" 
(UniqueName: \"kubernetes.io/projected/ced818ea-81fd-421d-b134-308b13cc2076-kube-api-access-j4qzw\") pod \"cinder-0b23-account-create-update-q8c7m\" (UID: \"ced818ea-81fd-421d-b134-308b13cc2076\") " pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.694547 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.694950 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65732d4f-b7b6-44fd-a200-22f9a70de277-operator-scripts\") pod \"heat-c035-account-create-update-tfx4j\" (UID: \"65732d4f-b7b6-44fd-a200-22f9a70de277\") " pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.723183 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e836-account-create-update-tshc7"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.723836 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hztnj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.727488 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mq89\" (UniqueName: \"kubernetes.io/projected/65732d4f-b7b6-44fd-a200-22f9a70de277-kube-api-access-7mq89\") pod \"heat-c035-account-create-update-tfx4j\" (UID: \"65732d4f-b7b6-44fd-a200-22f9a70de277\") " pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.739788 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cg4t6"] Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796148 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-config-data\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796197 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lt8\" (UniqueName: \"kubernetes.io/projected/79e13bff-170f-4963-856b-406322ac0ab4-kube-api-access-q9lt8\") pod \"neutron-db-create-cg4t6\" (UID: \"79e13bff-170f-4963-856b-406322ac0ab4\") " pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-operator-scripts\") pod \"neutron-e836-account-create-update-tshc7\" (UID: \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\") " pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796242 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpbtk\" (UniqueName: \"kubernetes.io/projected/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-kube-api-access-xpbtk\") pod \"neutron-e836-account-create-update-tshc7\" (UID: \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\") " pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced818ea-81fd-421d-b134-308b13cc2076-operator-scripts\") pod \"cinder-0b23-account-create-update-q8c7m\" (UID: \"ced818ea-81fd-421d-b134-308b13cc2076\") " pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796454 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-operator-scripts\") pod \"barbican-db-create-545fj\" (UID: \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\") " pod="openstack/barbican-db-create-545fj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796676 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e13bff-170f-4963-856b-406322ac0ab4-operator-scripts\") pod \"neutron-db-create-cg4t6\" (UID: \"79e13bff-170f-4963-856b-406322ac0ab4\") " pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796719 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-combined-ca-bundle\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796798 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4qzw\" (UniqueName: \"kubernetes.io/projected/ced818ea-81fd-421d-b134-308b13cc2076-kube-api-access-j4qzw\") pod \"cinder-0b23-account-create-update-q8c7m\" (UID: \"ced818ea-81fd-421d-b134-308b13cc2076\") " pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796874 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp6s9\" (UniqueName: \"kubernetes.io/projected/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-kube-api-access-bp6s9\") pod \"barbican-db-create-545fj\" (UID: \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\") " pod="openstack/barbican-db-create-545fj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.796901 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n2hz\" (UniqueName: \"kubernetes.io/projected/fb873a37-184a-4086-b6c6-3164fa76cbce-kube-api-access-8n2hz\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.798439 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced818ea-81fd-421d-b134-308b13cc2076-operator-scripts\") pod \"cinder-0b23-account-create-update-q8c7m\" (UID: \"ced818ea-81fd-421d-b134-308b13cc2076\") " pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.801215 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-config-data\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.801594 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-operator-scripts\") pod \"barbican-db-create-545fj\" (UID: \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\") " pod="openstack/barbican-db-create-545fj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.810678 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.813554 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-combined-ca-bundle\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.816726 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4qzw\" (UniqueName: \"kubernetes.io/projected/ced818ea-81fd-421d-b134-308b13cc2076-kube-api-access-j4qzw\") pod \"cinder-0b23-account-create-update-q8c7m\" (UID: \"ced818ea-81fd-421d-b134-308b13cc2076\") " pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.817467 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n2hz\" (UniqueName: \"kubernetes.io/projected/fb873a37-184a-4086-b6c6-3164fa76cbce-kube-api-access-8n2hz\") pod \"keystone-db-sync-pdqxz\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") " pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.817925 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.824036 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp6s9\" (UniqueName: \"kubernetes.io/projected/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-kube-api-access-bp6s9\") pod \"barbican-db-create-545fj\" (UID: \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\") " pod="openstack/barbican-db-create-545fj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.870771 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.882016 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-545fj" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.899371 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-operator-scripts\") pod \"neutron-e836-account-create-update-tshc7\" (UID: \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\") " pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.899415 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpbtk\" (UniqueName: \"kubernetes.io/projected/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-kube-api-access-xpbtk\") pod \"neutron-e836-account-create-update-tshc7\" (UID: \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\") " pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.899438 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lt8\" (UniqueName: \"kubernetes.io/projected/79e13bff-170f-4963-856b-406322ac0ab4-kube-api-access-q9lt8\") pod \"neutron-db-create-cg4t6\" (UID: \"79e13bff-170f-4963-856b-406322ac0ab4\") " pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.899552 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e13bff-170f-4963-856b-406322ac0ab4-operator-scripts\") pod \"neutron-db-create-cg4t6\" (UID: \"79e13bff-170f-4963-856b-406322ac0ab4\") " pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.901345 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-operator-scripts\") pod \"neutron-e836-account-create-update-tshc7\" (UID: \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\") " pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.901821 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e13bff-170f-4963-856b-406322ac0ab4-operator-scripts\") pod \"neutron-db-create-cg4t6\" (UID: \"79e13bff-170f-4963-856b-406322ac0ab4\") " pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.927430 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lt8\" (UniqueName: \"kubernetes.io/projected/79e13bff-170f-4963-856b-406322ac0ab4-kube-api-access-q9lt8\") pod \"neutron-db-create-cg4t6\" (UID: \"79e13bff-170f-4963-856b-406322ac0ab4\") " pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.929844 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpbtk\" (UniqueName: \"kubernetes.io/projected/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-kube-api-access-xpbtk\") pod \"neutron-e836-account-create-update-tshc7\" (UID: \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\") " pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:49 crc kubenswrapper[4792]: I1127 17:30:49.973999 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:50 crc kubenswrapper[4792]: I1127 17:30:50.025314 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:50 crc kubenswrapper[4792]: I1127 17:30:50.180606 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:50 crc kubenswrapper[4792]: I1127 17:30:50.406878 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e32d-account-create-update-cnfgl"] Nov 27 17:30:50 crc kubenswrapper[4792]: I1127 17:30:50.436164 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8wxff"] Nov 27 17:30:50 crc kubenswrapper[4792]: W1127 17:30:50.645140 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b243cfb_4dc0_4b6f_b0bb_f4fe72c8b8c3.slice/crio-e2085455b1051bb8ae802f43b260f5f6076d155a1add6677edb86b63a96f9db8 WatchSource:0}: Error finding container e2085455b1051bb8ae802f43b260f5f6076d155a1add6677edb86b63a96f9db8: Status 404 returned error can't find the container with id e2085455b1051bb8ae802f43b260f5f6076d155a1add6677edb86b63a96f9db8 Nov 27 17:30:50 crc kubenswrapper[4792]: I1127 17:30:50.679431 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-hztnj"] Nov 27 17:30:50 crc kubenswrapper[4792]: I1127 17:30:50.829978 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c035-account-create-update-tfx4j"] Nov 27 17:30:50 crc kubenswrapper[4792]: W1127 17:30:50.835140 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65732d4f_b7b6_44fd_a200_22f9a70de277.slice/crio-43ea411a8bdf84751721c482889488eba094c28796d9c141aa4fc4c18a95b033 WatchSource:0}: Error finding container 43ea411a8bdf84751721c482889488eba094c28796d9c141aa4fc4c18a95b033: Status 404 returned error can't find the container with id 43ea411a8bdf84751721c482889488eba094c28796d9c141aa4fc4c18a95b033 Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.133699 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pdqxz"] Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.140290 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0b23-account-create-update-q8c7m"] Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.171028 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-545fj"] Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.220825 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e836-account-create-update-tshc7"] Nov 27 17:30:51 crc kubenswrapper[4792]: W1127 17:30:51.228910 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba88d25c_da6d_421d_bfe5_cee0ea2d9b96.slice/crio-e3986b7e3bc53d9f299f289a35f347315689cb9e759513aec310d20232123770 WatchSource:0}: Error finding container e3986b7e3bc53d9f299f289a35f347315689cb9e759513aec310d20232123770: Status 404 returned error can't find the container with id e3986b7e3bc53d9f299f289a35f347315689cb9e759513aec310d20232123770 Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.236208 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-db-create-cg4t6"] Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.380726 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8wxff" event={"ID":"baa1189f-a300-44d7-9a8b-008bc13f13f7","Type":"ContainerStarted","Data":"f8d2b862fda9f57e944f5d80c95cb15e591970ebca80074e98bea07822a6f2b4"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.380773 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8wxff" event={"ID":"baa1189f-a300-44d7-9a8b-008bc13f13f7","Type":"ContainerStarted","Data":"2e75d67e7d2cc13fb1a4abf242ce873b596c395965188d4560280ca04151f376"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.384182 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdqxz" event={"ID":"fb873a37-184a-4086-b6c6-3164fa76cbce","Type":"ContainerStarted","Data":"4bb4db41dac1d6ccb2436a13719e3fc3fcb48b9a9783ae16938bee03206947ef"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.400414 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0b23-account-create-update-q8c7m" event={"ID":"ced818ea-81fd-421d-b134-308b13cc2076","Type":"ContainerStarted","Data":"b9636510134e799d5e6e21a0ce7dd6e8ad8dd2fed15b3a650a626e784e53f4ed"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.412628 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hztnj" event={"ID":"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3","Type":"ContainerStarted","Data":"0cb898e66a9f1f469f7f18418ed7b97aa43788a9742d2a60ea0870c786fd62db"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.412686 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hztnj" event={"ID":"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3","Type":"ContainerStarted","Data":"e2085455b1051bb8ae802f43b260f5f6076d155a1add6677edb86b63a96f9db8"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.416210 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e836-account-create-update-tshc7" event={"ID":"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd","Type":"ContainerStarted","Data":"8932e06bbfe01d54ebd5d46782f4dd2bac90cee71318a5d2ea16310425eb7ee9"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.419291 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c035-account-create-update-tfx4j" event={"ID":"65732d4f-b7b6-44fd-a200-22f9a70de277","Type":"ContainerStarted","Data":"ad81c6af957b70eb152622420787e4803bbdb61cf76332303d9885cd96318311"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.419332 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c035-account-create-update-tfx4j" event={"ID":"65732d4f-b7b6-44fd-a200-22f9a70de277","Type":"ContainerStarted","Data":"43ea411a8bdf84751721c482889488eba094c28796d9c141aa4fc4c18a95b033"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.434746 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-hztnj" podStartSLOduration=2.434728181 podStartE2EDuration="2.434728181s" podCreationTimestamp="2025-11-27 17:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:51.431683825 +0000 UTC m=+1273.774510153" watchObservedRunningTime="2025-11-27 17:30:51.434728181 +0000 UTC m=+1273.777554499" Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.442133 4792 generic.go:334] "Generic (PLEG): 
container finished" podID="6fc1c911-18ed-47e3-9028-e3accc0567fe" containerID="db15b93b93b449a99385ceaf30549758c8a99ce857c94da356a8f1ea058b6e07" exitCode=0 Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.442196 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e32d-account-create-update-cnfgl" event={"ID":"6fc1c911-18ed-47e3-9028-e3accc0567fe","Type":"ContainerDied","Data":"db15b93b93b449a99385ceaf30549758c8a99ce857c94da356a8f1ea058b6e07"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.442217 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e32d-account-create-update-cnfgl" event={"ID":"6fc1c911-18ed-47e3-9028-e3accc0567fe","Type":"ContainerStarted","Data":"4305783a1e9f1c2dd5d45ca3d98b3c4303662b5ffadcb9a1bc03828dcc5a9272"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.452586 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-545fj" event={"ID":"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96","Type":"ContainerStarted","Data":"e3986b7e3bc53d9f299f289a35f347315689cb9e759513aec310d20232123770"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.462336 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cg4t6" event={"ID":"79e13bff-170f-4963-856b-406322ac0ab4","Type":"ContainerStarted","Data":"b640ed9e0cdeae5ebb1d2c2b0797d5df67b332fa78d1b096a6141e61b050a6c5"} Nov 27 17:30:51 crc kubenswrapper[4792]: I1127 17:30:51.487127 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-c035-account-create-update-tfx4j" podStartSLOduration=2.487113026 podStartE2EDuration="2.487113026s" podCreationTimestamp="2025-11-27 17:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:30:51.465459166 +0000 UTC m=+1273.808285514" watchObservedRunningTime="2025-11-27 17:30:51.487113026 +0000 UTC m=+1273.829939344" Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.104614 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.110600 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.475968 4792 generic.go:334] "Generic (PLEG): container finished" podID="79e13bff-170f-4963-856b-406322ac0ab4" containerID="882f58d03fab635ede16b549ccc6f16e4e525f8e15d5647cda2132f051cac36e" exitCode=0 Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.476338 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cg4t6" event={"ID":"79e13bff-170f-4963-856b-406322ac0ab4","Type":"ContainerDied","Data":"882f58d03fab635ede16b549ccc6f16e4e525f8e15d5647cda2132f051cac36e"} Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.478927 4792 generic.go:334] "Generic (PLEG): container finished" podID="baa1189f-a300-44d7-9a8b-008bc13f13f7" containerID="f8d2b862fda9f57e944f5d80c95cb15e591970ebca80074e98bea07822a6f2b4" exitCode=0 Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.478972 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8wxff" event={"ID":"baa1189f-a300-44d7-9a8b-008bc13f13f7","Type":"ContainerDied","Data":"f8d2b862fda9f57e944f5d80c95cb15e591970ebca80074e98bea07822a6f2b4"} Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 
17:30:52.483707 4792 generic.go:334] "Generic (PLEG): container finished" podID="8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3" containerID="0cb898e66a9f1f469f7f18418ed7b97aa43788a9742d2a60ea0870c786fd62db" exitCode=0 Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.483872 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hztnj" event={"ID":"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3","Type":"ContainerDied","Data":"0cb898e66a9f1f469f7f18418ed7b97aa43788a9742d2a60ea0870c786fd62db"} Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.486923 4792 generic.go:334] "Generic (PLEG): container finished" podID="ced818ea-81fd-421d-b134-308b13cc2076" containerID="388c2e2bc52bf1525ecaeff0f766459b888ab06136a2db1634c0f40fddd3523e" exitCode=0 Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.487029 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0b23-account-create-update-q8c7m" event={"ID":"ced818ea-81fd-421d-b134-308b13cc2076","Type":"ContainerDied","Data":"388c2e2bc52bf1525ecaeff0f766459b888ab06136a2db1634c0f40fddd3523e"} Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.490930 4792 generic.go:334] "Generic (PLEG): container finished" podID="3492cbfb-1b2a-4a4b-bad4-d577328a4fcd" containerID="052abeaf02e2566f5fd89cf551bd2e18f28e5418e90fe261a84bf97c763717d3" exitCode=0 Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.491031 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e836-account-create-update-tshc7" event={"ID":"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd","Type":"ContainerDied","Data":"052abeaf02e2566f5fd89cf551bd2e18f28e5418e90fe261a84bf97c763717d3"} Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.496083 4792 generic.go:334] "Generic (PLEG): container finished" podID="65732d4f-b7b6-44fd-a200-22f9a70de277" containerID="ad81c6af957b70eb152622420787e4803bbdb61cf76332303d9885cd96318311" exitCode=0 Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.496171 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c035-account-create-update-tfx4j" event={"ID":"65732d4f-b7b6-44fd-a200-22f9a70de277","Type":"ContainerDied","Data":"ad81c6af957b70eb152622420787e4803bbdb61cf76332303d9885cd96318311"} Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.497752 4792 generic.go:334] "Generic (PLEG): container finished" podID="ba88d25c-da6d-421d-bfe5-cee0ea2d9b96" containerID="74553bd5855243d6478552fb25f6d54df05f0c1556a0ae51f5ba8914e093aee1" exitCode=0 Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.498510 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-545fj" event={"ID":"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96","Type":"ContainerDied","Data":"74553bd5855243d6478552fb25f6d54df05f0c1556a0ae51f5ba8914e093aee1"} Nov 27 17:30:52 crc kubenswrapper[4792]: I1127 17:30:52.503543 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.073329 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.081217 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.213571 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc1c911-18ed-47e3-9028-e3accc0567fe-operator-scripts\") pod \"6fc1c911-18ed-47e3-9028-e3accc0567fe\" (UID: \"6fc1c911-18ed-47e3-9028-e3accc0567fe\") " Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.214252 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc1c911-18ed-47e3-9028-e3accc0567fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fc1c911-18ed-47e3-9028-e3accc0567fe" (UID: "6fc1c911-18ed-47e3-9028-e3accc0567fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.214367 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfddd\" (UniqueName: \"kubernetes.io/projected/6fc1c911-18ed-47e3-9028-e3accc0567fe-kube-api-access-hfddd\") pod \"6fc1c911-18ed-47e3-9028-e3accc0567fe\" (UID: \"6fc1c911-18ed-47e3-9028-e3accc0567fe\") " Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.214523 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8rgj\" (UniqueName: \"kubernetes.io/projected/baa1189f-a300-44d7-9a8b-008bc13f13f7-kube-api-access-p8rgj\") pod \"baa1189f-a300-44d7-9a8b-008bc13f13f7\" (UID: \"baa1189f-a300-44d7-9a8b-008bc13f13f7\") " Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.214595 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa1189f-a300-44d7-9a8b-008bc13f13f7-operator-scripts\") pod \"baa1189f-a300-44d7-9a8b-008bc13f13f7\" (UID: \"baa1189f-a300-44d7-9a8b-008bc13f13f7\") " Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.215026 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa1189f-a300-44d7-9a8b-008bc13f13f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "baa1189f-a300-44d7-9a8b-008bc13f13f7" (UID: "baa1189f-a300-44d7-9a8b-008bc13f13f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.215520 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baa1189f-a300-44d7-9a8b-008bc13f13f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.215544 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fc1c911-18ed-47e3-9028-e3accc0567fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.232846 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc1c911-18ed-47e3-9028-e3accc0567fe-kube-api-access-hfddd" (OuterVolumeSpecName: "kube-api-access-hfddd") pod "6fc1c911-18ed-47e3-9028-e3accc0567fe" (UID: "6fc1c911-18ed-47e3-9028-e3accc0567fe"). InnerVolumeSpecName "kube-api-access-hfddd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.232907 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa1189f-a300-44d7-9a8b-008bc13f13f7-kube-api-access-p8rgj" (OuterVolumeSpecName: "kube-api-access-p8rgj") pod "baa1189f-a300-44d7-9a8b-008bc13f13f7" (UID: "baa1189f-a300-44d7-9a8b-008bc13f13f7"). InnerVolumeSpecName "kube-api-access-p8rgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.317478 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfddd\" (UniqueName: \"kubernetes.io/projected/6fc1c911-18ed-47e3-9028-e3accc0567fe-kube-api-access-hfddd\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.317523 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8rgj\" (UniqueName: \"kubernetes.io/projected/baa1189f-a300-44d7-9a8b-008bc13f13f7-kube-api-access-p8rgj\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.510523 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e32d-account-create-update-cnfgl" event={"ID":"6fc1c911-18ed-47e3-9028-e3accc0567fe","Type":"ContainerDied","Data":"4305783a1e9f1c2dd5d45ca3d98b3c4303662b5ffadcb9a1bc03828dcc5a9272"} Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.510565 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4305783a1e9f1c2dd5d45ca3d98b3c4303662b5ffadcb9a1bc03828dcc5a9272" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.510539 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e32d-account-create-update-cnfgl" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.512137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8wxff" event={"ID":"baa1189f-a300-44d7-9a8b-008bc13f13f7","Type":"ContainerDied","Data":"2e75d67e7d2cc13fb1a4abf242ce873b596c395965188d4560280ca04151f376"} Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.512174 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e75d67e7d2cc13fb1a4abf242ce873b596c395965188d4560280ca04151f376" Nov 27 17:30:53 crc kubenswrapper[4792]: I1127 17:30:53.512140 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8wxff" Nov 27 17:30:54 crc kubenswrapper[4792]: I1127 17:30:54.524395 4792 generic.go:334] "Generic (PLEG): container finished" podID="b04ada6c-7744-4237-8361-9c5cccad61b3" containerID="f71a0ea8d82fcad94cf63e49123b6ac0d609379affa87069e10f839f98a04b48" exitCode=0 Nov 27 17:30:54 crc kubenswrapper[4792]: I1127 17:30:54.524634 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5m4nt" event={"ID":"b04ada6c-7744-4237-8361-9c5cccad61b3","Type":"ContainerDied","Data":"f71a0ea8d82fcad94cf63e49123b6ac0d609379affa87069e10f839f98a04b48"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.010508 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.012054 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-hztnj" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.013246 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-545fj" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.018482 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.027991 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.037948 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5m4nt" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.114973 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-config-data\") pod \"b04ada6c-7744-4237-8361-9c5cccad61b3\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115037 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-operator-scripts\") pod \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\" (UID: \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115064 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lt8\" (UniqueName: \"kubernetes.io/projected/79e13bff-170f-4963-856b-406322ac0ab4-kube-api-access-q9lt8\") pod \"79e13bff-170f-4963-856b-406322ac0ab4\" (UID: \"79e13bff-170f-4963-856b-406322ac0ab4\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115105 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmk4h\" (UniqueName: \"kubernetes.io/projected/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-kube-api-access-hmk4h\") pod \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\" (UID: \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115136 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-db-sync-config-data\") pod \"b04ada6c-7744-4237-8361-9c5cccad61b3\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115186 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zp4n\" (UniqueName: \"kubernetes.io/projected/b04ada6c-7744-4237-8361-9c5cccad61b3-kube-api-access-2zp4n\") pod \"b04ada6c-7744-4237-8361-9c5cccad61b3\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115233 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced818ea-81fd-421d-b134-308b13cc2076-operator-scripts\") pod \"ced818ea-81fd-421d-b134-308b13cc2076\" (UID: \"ced818ea-81fd-421d-b134-308b13cc2076\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115256 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4qzw\" (UniqueName: 
\"kubernetes.io/projected/ced818ea-81fd-421d-b134-308b13cc2076-kube-api-access-j4qzw\") pod \"ced818ea-81fd-421d-b134-308b13cc2076\" (UID: \"ced818ea-81fd-421d-b134-308b13cc2076\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115311 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-combined-ca-bundle\") pod \"b04ada6c-7744-4237-8361-9c5cccad61b3\" (UID: \"b04ada6c-7744-4237-8361-9c5cccad61b3\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115346 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp6s9\" (UniqueName: \"kubernetes.io/projected/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-kube-api-access-bp6s9\") pod \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\" (UID: \"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115364 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e13bff-170f-4963-856b-406322ac0ab4-operator-scripts\") pod \"79e13bff-170f-4963-856b-406322ac0ab4\" (UID: \"79e13bff-170f-4963-856b-406322ac0ab4\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115388 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-operator-scripts\") pod \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\" (UID: \"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115409 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-operator-scripts\") pod \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\" (UID: \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.115473 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpbtk\" (UniqueName: \"kubernetes.io/projected/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-kube-api-access-xpbtk\") pod \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\" (UID: \"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.116251 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced818ea-81fd-421d-b134-308b13cc2076-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ced818ea-81fd-421d-b134-308b13cc2076" (UID: "ced818ea-81fd-421d-b134-308b13cc2076"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.117012 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ced818ea-81fd-421d-b134-308b13cc2076-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.117917 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.118204 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e13bff-170f-4963-856b-406322ac0ab4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79e13bff-170f-4963-856b-406322ac0ab4" (UID: "79e13bff-170f-4963-856b-406322ac0ab4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.118547 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3" (UID: "8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.118881 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3492cbfb-1b2a-4a4b-bad4-d577328a4fcd" (UID: "3492cbfb-1b2a-4a4b-bad4-d577328a4fcd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.120803 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced818ea-81fd-421d-b134-308b13cc2076-kube-api-access-j4qzw" (OuterVolumeSpecName: "kube-api-access-j4qzw") pod "ced818ea-81fd-421d-b134-308b13cc2076" (UID: "ced818ea-81fd-421d-b134-308b13cc2076"). InnerVolumeSpecName "kube-api-access-j4qzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.121164 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba88d25c-da6d-421d-bfe5-cee0ea2d9b96" (UID: "ba88d25c-da6d-421d-bfe5-cee0ea2d9b96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.124725 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b04ada6c-7744-4237-8361-9c5cccad61b3" (UID: "b04ada6c-7744-4237-8361-9c5cccad61b3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.125986 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-kube-api-access-xpbtk" (OuterVolumeSpecName: "kube-api-access-xpbtk") pod "3492cbfb-1b2a-4a4b-bad4-d577328a4fcd" (UID: "3492cbfb-1b2a-4a4b-bad4-d577328a4fcd"). InnerVolumeSpecName "kube-api-access-xpbtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.131246 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b04ada6c-7744-4237-8361-9c5cccad61b3-kube-api-access-2zp4n" (OuterVolumeSpecName: "kube-api-access-2zp4n") pod "b04ada6c-7744-4237-8361-9c5cccad61b3" (UID: "b04ada6c-7744-4237-8361-9c5cccad61b3"). InnerVolumeSpecName "kube-api-access-2zp4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.171594 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-kube-api-access-bp6s9" (OuterVolumeSpecName: "kube-api-access-bp6s9") pod "ba88d25c-da6d-421d-bfe5-cee0ea2d9b96" (UID: "ba88d25c-da6d-421d-bfe5-cee0ea2d9b96"). InnerVolumeSpecName "kube-api-access-bp6s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.188795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e13bff-170f-4963-856b-406322ac0ab4-kube-api-access-q9lt8" (OuterVolumeSpecName: "kube-api-access-q9lt8") pod "79e13bff-170f-4963-856b-406322ac0ab4" (UID: "79e13bff-170f-4963-856b-406322ac0ab4"). InnerVolumeSpecName "kube-api-access-q9lt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.196069 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-kube-api-access-hmk4h" (OuterVolumeSpecName: "kube-api-access-hmk4h") pod "8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3" (UID: "8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3"). InnerVolumeSpecName "kube-api-access-hmk4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.218983 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65732d4f-b7b6-44fd-a200-22f9a70de277-operator-scripts\") pod \"65732d4f-b7b6-44fd-a200-22f9a70de277\" (UID: \"65732d4f-b7b6-44fd-a200-22f9a70de277\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.219087 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mq89\" (UniqueName: \"kubernetes.io/projected/65732d4f-b7b6-44fd-a200-22f9a70de277-kube-api-access-7mq89\") pod \"65732d4f-b7b6-44fd-a200-22f9a70de277\" (UID: \"65732d4f-b7b6-44fd-a200-22f9a70de277\") " Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220331 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220363 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lt8\" (UniqueName: \"kubernetes.io/projected/79e13bff-170f-4963-856b-406322ac0ab4-kube-api-access-q9lt8\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220572 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmk4h\" (UniqueName: \"kubernetes.io/projected/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-kube-api-access-hmk4h\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220696 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220717 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zp4n\" (UniqueName: \"kubernetes.io/projected/b04ada6c-7744-4237-8361-9c5cccad61b3-kube-api-access-2zp4n\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220731 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4qzw\" (UniqueName: \"kubernetes.io/projected/ced818ea-81fd-421d-b134-308b13cc2076-kube-api-access-j4qzw\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220744 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp6s9\" (UniqueName: \"kubernetes.io/projected/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96-kube-api-access-bp6s9\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220756 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e13bff-170f-4963-856b-406322ac0ab4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220770 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220786 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.220799 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpbtk\" (UniqueName: \"kubernetes.io/projected/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd-kube-api-access-xpbtk\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.224666 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65732d4f-b7b6-44fd-a200-22f9a70de277-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65732d4f-b7b6-44fd-a200-22f9a70de277" (UID: "65732d4f-b7b6-44fd-a200-22f9a70de277"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.226875 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65732d4f-b7b6-44fd-a200-22f9a70de277-kube-api-access-7mq89" (OuterVolumeSpecName: "kube-api-access-7mq89") pod "65732d4f-b7b6-44fd-a200-22f9a70de277" (UID: "65732d4f-b7b6-44fd-a200-22f9a70de277"). InnerVolumeSpecName "kube-api-access-7mq89". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.239720 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b04ada6c-7744-4237-8361-9c5cccad61b3" (UID: "b04ada6c-7744-4237-8361-9c5cccad61b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.278256 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-config-data" (OuterVolumeSpecName: "config-data") pod "b04ada6c-7744-4237-8361-9c5cccad61b3" (UID: "b04ada6c-7744-4237-8361-9c5cccad61b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.322525 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65732d4f-b7b6-44fd-a200-22f9a70de277-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.322558 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mq89\" (UniqueName: \"kubernetes.io/projected/65732d4f-b7b6-44fd-a200-22f9a70de277-kube-api-access-7mq89\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.322570 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.322580 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b04ada6c-7744-4237-8361-9c5cccad61b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.557844 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-545fj" event={"ID":"ba88d25c-da6d-421d-bfe5-cee0ea2d9b96","Type":"ContainerDied","Data":"e3986b7e3bc53d9f299f289a35f347315689cb9e759513aec310d20232123770"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.557874 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-545fj" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.557893 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3986b7e3bc53d9f299f289a35f347315689cb9e759513aec310d20232123770" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.560128 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cg4t6" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.560123 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cg4t6" event={"ID":"79e13bff-170f-4963-856b-406322ac0ab4","Type":"ContainerDied","Data":"b640ed9e0cdeae5ebb1d2c2b0797d5df67b332fa78d1b096a6141e61b050a6c5"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.560331 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b640ed9e0cdeae5ebb1d2c2b0797d5df67b332fa78d1b096a6141e61b050a6c5" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.562981 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdqxz" event={"ID":"fb873a37-184a-4086-b6c6-3164fa76cbce","Type":"ContainerStarted","Data":"701477e73b8a7e892e10f998ffe3b15379564860acf8085515f92229be2fb539"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.566911 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hztnj" event={"ID":"8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3","Type":"ContainerDied","Data":"e2085455b1051bb8ae802f43b260f5f6076d155a1add6677edb86b63a96f9db8"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.566958 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2085455b1051bb8ae802f43b260f5f6076d155a1add6677edb86b63a96f9db8" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.567034 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-hztnj" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.571146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0b23-account-create-update-q8c7m" event={"ID":"ced818ea-81fd-421d-b134-308b13cc2076","Type":"ContainerDied","Data":"b9636510134e799d5e6e21a0ce7dd6e8ad8dd2fed15b3a650a626e784e53f4ed"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.571176 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9636510134e799d5e6e21a0ce7dd6e8ad8dd2fed15b3a650a626e784e53f4ed" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.571224 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0b23-account-create-update-q8c7m" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.574010 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e836-account-create-update-tshc7" event={"ID":"3492cbfb-1b2a-4a4b-bad4-d577328a4fcd","Type":"ContainerDied","Data":"8932e06bbfe01d54ebd5d46782f4dd2bac90cee71318a5d2ea16310425eb7ee9"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.574053 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8932e06bbfe01d54ebd5d46782f4dd2bac90cee71318a5d2ea16310425eb7ee9" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.574024 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e836-account-create-update-tshc7" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.576032 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c035-account-create-update-tfx4j" event={"ID":"65732d4f-b7b6-44fd-a200-22f9a70de277","Type":"ContainerDied","Data":"43ea411a8bdf84751721c482889488eba094c28796d9c141aa4fc4c18a95b033"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.576067 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43ea411a8bdf84751721c482889488eba094c28796d9c141aa4fc4c18a95b033" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.576132 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c035-account-create-update-tfx4j" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.587241 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5m4nt" event={"ID":"b04ada6c-7744-4237-8361-9c5cccad61b3","Type":"ContainerDied","Data":"b5eab511a2a510293f4d2dabcb054b042330950cd1d525ea753e37d30b09a75f"} Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.587317 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5eab511a2a510293f4d2dabcb054b042330950cd1d525ea753e37d30b09a75f" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.587379 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5m4nt" Nov 27 17:30:57 crc kubenswrapper[4792]: I1127 17:30:57.593293 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pdqxz" podStartSLOduration=2.8631878139999998 podStartE2EDuration="8.593275874s" podCreationTimestamp="2025-11-27 17:30:49 +0000 UTC" firstStartedPulling="2025-11-27 17:30:51.187132843 +0000 UTC m=+1273.529959161" lastFinishedPulling="2025-11-27 17:30:56.917220893 +0000 UTC m=+1279.260047221" observedRunningTime="2025-11-27 17:30:57.587170722 +0000 UTC m=+1279.929997050" watchObservedRunningTime="2025-11-27 17:30:57.593275874 +0000 UTC m=+1279.936102192" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.727137 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-229wk"] Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.727960 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.727975 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.728060 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b04ada6c-7744-4237-8361-9c5cccad61b3" containerName="glance-db-sync" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728068 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b04ada6c-7744-4237-8361-9c5cccad61b3" containerName="glance-db-sync" Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.728078 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba88d25c-da6d-421d-bfe5-cee0ea2d9b96" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728085 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba88d25c-da6d-421d-bfe5-cee0ea2d9b96" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.728096 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc1c911-18ed-47e3-9028-e3accc0567fe" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728102 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc1c911-18ed-47e3-9028-e3accc0567fe" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.728119 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3492cbfb-1b2a-4a4b-bad4-d577328a4fcd" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728126 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3492cbfb-1b2a-4a4b-bad4-d577328a4fcd" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.728138 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced818ea-81fd-421d-b134-308b13cc2076" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728145 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced818ea-81fd-421d-b134-308b13cc2076" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.728157 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e13bff-170f-4963-856b-406322ac0ab4" 
containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728162 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e13bff-170f-4963-856b-406322ac0ab4" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.728174 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65732d4f-b7b6-44fd-a200-22f9a70de277" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728180 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="65732d4f-b7b6-44fd-a200-22f9a70de277" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: E1127 17:30:58.728192 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa1189f-a300-44d7-9a8b-008bc13f13f7" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728199 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa1189f-a300-44d7-9a8b-008bc13f13f7" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728378 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3492cbfb-1b2a-4a4b-bad4-d577328a4fcd" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728391 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced818ea-81fd-421d-b134-308b13cc2076" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728398 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e13bff-170f-4963-856b-406322ac0ab4" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728407 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba88d25c-da6d-421d-bfe5-cee0ea2d9b96" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728416 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="65732d4f-b7b6-44fd-a200-22f9a70de277" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728423 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b04ada6c-7744-4237-8361-9c5cccad61b3" containerName="glance-db-sync" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728435 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc1c911-18ed-47e3-9028-e3accc0567fe" containerName="mariadb-account-create-update" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728448 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa1189f-a300-44d7-9a8b-008bc13f13f7" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.728456 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3" containerName="mariadb-database-create" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.729550 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.733373 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-229wk"] Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.864862 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-config\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.865166 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-svc\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.865296 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.865402 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.865543 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lqg2\" (UniqueName: \"kubernetes.io/projected/6b34f1af-7df6-4102-924b-db57a7d95418-kube-api-access-7lqg2\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.865693 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.967372 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-svc\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.967716 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.967864 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.968013 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lqg2\" (UniqueName: \"kubernetes.io/projected/6b34f1af-7df6-4102-924b-db57a7d95418-kube-api-access-7lqg2\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.968141 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.968352 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-svc\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.968364 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-config\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.968665 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-swift-storage-0\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.968790 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-sb\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.968910 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-nb\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.969034 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-config\") pod \"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:58 crc kubenswrapper[4792]: I1127 17:30:58.995445 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lqg2\" (UniqueName: \"kubernetes.io/projected/6b34f1af-7df6-4102-924b-db57a7d95418-kube-api-access-7lqg2\") pod 
\"dnsmasq-dns-895cf5cf-229wk\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:59 crc kubenswrapper[4792]: I1127 17:30:59.063126 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:30:59 crc kubenswrapper[4792]: I1127 17:30:59.690081 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-229wk"] Nov 27 17:30:59 crc kubenswrapper[4792]: W1127 17:30:59.699260 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b34f1af_7df6_4102_924b_db57a7d95418.slice/crio-4aec4257d1b75c5d322471d45f3dce1a20a8d11d447e5802e7e44b9a77ff584c WatchSource:0}: Error finding container 4aec4257d1b75c5d322471d45f3dce1a20a8d11d447e5802e7e44b9a77ff584c: Status 404 returned error can't find the container with id 4aec4257d1b75c5d322471d45f3dce1a20a8d11d447e5802e7e44b9a77ff584c Nov 27 17:31:00 crc kubenswrapper[4792]: I1127 17:31:00.639061 4792 generic.go:334] "Generic (PLEG): container finished" podID="6b34f1af-7df6-4102-924b-db57a7d95418" containerID="d934c558023e82468af33c7cccdf9e0a0d0f28f8b9b780e186e0b2e2bed1cceb" exitCode=0 Nov 27 17:31:00 crc kubenswrapper[4792]: I1127 17:31:00.639205 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-229wk" event={"ID":"6b34f1af-7df6-4102-924b-db57a7d95418","Type":"ContainerDied","Data":"d934c558023e82468af33c7cccdf9e0a0d0f28f8b9b780e186e0b2e2bed1cceb"} Nov 27 17:31:00 crc kubenswrapper[4792]: I1127 17:31:00.639383 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-229wk" event={"ID":"6b34f1af-7df6-4102-924b-db57a7d95418","Type":"ContainerStarted","Data":"4aec4257d1b75c5d322471d45f3dce1a20a8d11d447e5802e7e44b9a77ff584c"} Nov 27 17:31:00 crc kubenswrapper[4792]: I1127 17:31:00.641973 4792 generic.go:334] "Generic (PLEG): container finished" podID="fb873a37-184a-4086-b6c6-3164fa76cbce" containerID="701477e73b8a7e892e10f998ffe3b15379564860acf8085515f92229be2fb539" exitCode=0 Nov 27 17:31:00 crc kubenswrapper[4792]: I1127 17:31:00.642022 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdqxz" event={"ID":"fb873a37-184a-4086-b6c6-3164fa76cbce","Type":"ContainerDied","Data":"701477e73b8a7e892e10f998ffe3b15379564860acf8085515f92229be2fb539"} Nov 27 17:31:01 crc kubenswrapper[4792]: I1127 17:31:01.653910 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-229wk" event={"ID":"6b34f1af-7df6-4102-924b-db57a7d95418","Type":"ContainerStarted","Data":"9a8dcb55304c3b35ae5206aad8f8562fbda9f021448282bf242536a1c7890ba0"} Nov 27 17:31:01 crc kubenswrapper[4792]: I1127 17:31:01.654264 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:31:01 crc kubenswrapper[4792]: I1127 17:31:01.805964 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-895cf5cf-229wk" podStartSLOduration=3.805943445 podStartE2EDuration="3.805943445s" podCreationTimestamp="2025-11-27 17:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:01.742203097 +0000 UTC m=+1284.085029415" watchObservedRunningTime="2025-11-27 17:31:01.805943445 +0000 UTC m=+1284.148769763" Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.183590 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pdqxz"
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.247198 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-combined-ca-bundle\") pod \"fb873a37-184a-4086-b6c6-3164fa76cbce\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") "
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.248445 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-config-data\") pod \"fb873a37-184a-4086-b6c6-3164fa76cbce\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") "
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.248561 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n2hz\" (UniqueName: \"kubernetes.io/projected/fb873a37-184a-4086-b6c6-3164fa76cbce-kube-api-access-8n2hz\") pod \"fb873a37-184a-4086-b6c6-3164fa76cbce\" (UID: \"fb873a37-184a-4086-b6c6-3164fa76cbce\") "
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.254937 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb873a37-184a-4086-b6c6-3164fa76cbce-kube-api-access-8n2hz" (OuterVolumeSpecName: "kube-api-access-8n2hz") pod "fb873a37-184a-4086-b6c6-3164fa76cbce" (UID: "fb873a37-184a-4086-b6c6-3164fa76cbce"). InnerVolumeSpecName "kube-api-access-8n2hz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.282776 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb873a37-184a-4086-b6c6-3164fa76cbce" (UID: "fb873a37-184a-4086-b6c6-3164fa76cbce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.315106 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-config-data" (OuterVolumeSpecName: "config-data") pod "fb873a37-184a-4086-b6c6-3164fa76cbce" (UID: "fb873a37-184a-4086-b6c6-3164fa76cbce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.352979 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.353015 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb873a37-184a-4086-b6c6-3164fa76cbce-config-data\") on node \"crc\" DevicePath \"\""
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.353028 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n2hz\" (UniqueName: \"kubernetes.io/projected/fb873a37-184a-4086-b6c6-3164fa76cbce-kube-api-access-8n2hz\") on node \"crc\" DevicePath \"\""
Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.674262 4792 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-sync-pdqxz" Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.680866 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pdqxz" event={"ID":"fb873a37-184a-4086-b6c6-3164fa76cbce","Type":"ContainerDied","Data":"4bb4db41dac1d6ccb2436a13719e3fc3fcb48b9a9783ae16938bee03206947ef"} Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.680928 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb4db41dac1d6ccb2436a13719e3fc3fcb48b9a9783ae16938bee03206947ef" Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.958334 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-229wk"] Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.984664 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-sj89x"] Nov 27 17:31:02 crc kubenswrapper[4792]: E1127 17:31:02.985142 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb873a37-184a-4086-b6c6-3164fa76cbce" containerName="keystone-db-sync" Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.985163 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb873a37-184a-4086-b6c6-3164fa76cbce" containerName="keystone-db-sync" Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.985407 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb873a37-184a-4086-b6c6-3164fa76cbce" containerName="keystone-db-sync" Nov 27 17:31:02 crc kubenswrapper[4792]: I1127 17:31:02.986683 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.024208 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-sj89x"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.076959 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dlbr2"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.078189 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-config\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.078273 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.078300 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.078338 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.078343 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.079887 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm787\" (UniqueName: \"kubernetes.io/projected/5a9fa018-11cf-4578-9066-e53129cdd90f-kube-api-access-bm787\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.079948 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.084446 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p6xs8" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.090094 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.090279 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.097548 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.097750 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.140274 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dlbr2"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192309 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm787\" (UniqueName: \"kubernetes.io/projected/5a9fa018-11cf-4578-9066-e53129cdd90f-kube-api-access-bm787\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192376 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-credential-keys\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192407 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pr5h\" (UniqueName: \"kubernetes.io/projected/9bddb34e-4227-4435-b964-8c820c84ad4c-kube-api-access-4pr5h\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192489 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-config-data\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192536 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-combined-ca-bundle\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192552 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-fernet-keys\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192659 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-config\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192717 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192733 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192793 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.192844 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-scripts\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2"
Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.193879 4792 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.194434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-config\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.194950 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.195427 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.199300 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.199370 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-rr9ql"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.200775 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.203870 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.204126 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4s5ct" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.211219 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rr9ql"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.309961 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm787\" (UniqueName: \"kubernetes.io/projected/5a9fa018-11cf-4578-9066-e53129cdd90f-kube-api-access-bm787\") pod \"dnsmasq-dns-6c9c9f998c-sj89x\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.310768 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311318 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrvw\" (UniqueName: \"kubernetes.io/projected/1e3f2e74-1077-4a57-9851-1113b4a46729-kube-api-access-pnrvw\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311399 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-combined-ca-bundle\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311436 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-scripts\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311505 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-credential-keys\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311536 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pr5h\" (UniqueName: \"kubernetes.io/projected/9bddb34e-4227-4435-b964-8c820c84ad4c-kube-api-access-4pr5h\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311584 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-config-data\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311664 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-config-data\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311733 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-combined-ca-bundle\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.311749 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-fernet-keys\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: 
I1127 17:31:03.321578 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-fernet-keys\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.333141 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-scripts\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.335580 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-config-data\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.335754 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-combined-ca-bundle\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.346412 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-credential-keys\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.359064 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pr5h\" (UniqueName: \"kubernetes.io/projected/9bddb34e-4227-4435-b964-8c820c84ad4c-kube-api-access-4pr5h\") pod \"keystone-bootstrap-dlbr2\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.380712 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4xhxm"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.382215 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.394026 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vqq54" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.394237 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.413915 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-combined-ca-bundle\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.414015 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-config-data\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.414104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrvw\" (UniqueName: \"kubernetes.io/projected/1e3f2e74-1077-4a57-9851-1113b4a46729-kube-api-access-pnrvw\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.422905 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-config-data\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.423872 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-combined-ca-bundle\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.441871 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.456744 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vcdv9"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.464321 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrvw\" (UniqueName: \"kubernetes.io/projected/1e3f2e74-1077-4a57-9851-1113b4a46729-kube-api-access-pnrvw\") pod \"heat-db-sync-rr9ql\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.471003 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.491469 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.491515 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.491789 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5lcgs" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.530102 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.549891 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4xhxm"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.552347 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-db-sync-config-data\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.552439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-combined-ca-bundle\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.552551 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrk7d\" (UniqueName: \"kubernetes.io/projected/b22e78cf-a700-4a68-8b79-fdb0dc988a04-kube-api-access-xrk7d\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.621791 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vcdv9"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.657363 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-config-data\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.657500 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrk7d\" (UniqueName: \"kubernetes.io/projected/b22e78cf-a700-4a68-8b79-fdb0dc988a04-kube-api-access-xrk7d\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.657558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2wg\" (UniqueName: \"kubernetes.io/projected/2f1c2409-1610-4ede-ab33-880b170c802f-kube-api-access-sg2wg\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.657610 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-combined-ca-bundle\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.657848 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f1c2409-1610-4ede-ab33-880b170c802f-etc-machine-id\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.657903 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-db-sync-config-data\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.657922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-scripts\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.658387 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-db-sync-config-data\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.658656 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-combined-ca-bundle\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.667411 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-nnctq"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.669574 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-combined-ca-bundle\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.671377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-db-sync-config-data\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.671695 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.676905 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.678598 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-st6rt" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.679407 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.696612 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrk7d\" (UniqueName: \"kubernetes.io/projected/b22e78cf-a700-4a68-8b79-fdb0dc988a04-kube-api-access-xrk7d\") pod \"barbican-db-sync-4xhxm\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.704881 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-895cf5cf-229wk" podUID="6b34f1af-7df6-4102-924b-db57a7d95418" containerName="dnsmasq-dns" containerID="cri-o://9a8dcb55304c3b35ae5206aad8f8562fbda9f021448282bf242536a1c7890ba0" gracePeriod=10 Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.708472 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nnctq"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.720628 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-658m9"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.723346 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.726186 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.726310 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.726598 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ljsgq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.760035 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-658m9"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.761821 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2wg\" (UniqueName: \"kubernetes.io/projected/2f1c2409-1610-4ede-ab33-880b170c802f-kube-api-access-sg2wg\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.761931 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-combined-ca-bundle\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.762092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f1c2409-1610-4ede-ab33-880b170c802f-etc-machine-id\") pod \"cinder-db-sync-vcdv9\" (UID: 
\"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.762207 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-scripts\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.762296 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-db-sync-config-data\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.762483 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-config-data\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.763265 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f1c2409-1610-4ede-ab33-880b170c802f-etc-machine-id\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.771507 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-config-data\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.772640 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-combined-ca-bundle\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.773800 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-db-sync-config-data\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.782395 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-scripts\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.793593 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2wg\" (UniqueName: \"kubernetes.io/projected/2f1c2409-1610-4ede-ab33-880b170c802f-kube-api-access-sg2wg\") pod \"cinder-db-sync-vcdv9\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.815719 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-sj89x"] Nov 27 17:31:03 crc 
kubenswrapper[4792]: I1127 17:31:03.846283 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-r62mm"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.852886 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.864152 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-combined-ca-bundle\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.864230 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-config-data\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.864314 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-config\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.864369 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821eeff5-b48a-4380-986e-9a9f3bb929eb-logs\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.864421 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5bqs\" (UniqueName: \"kubernetes.io/projected/bacfd25c-6929-437c-887b-02b5b3f33b1e-kube-api-access-v5bqs\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.864448 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-scripts\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.864502 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7lh4\" (UniqueName: \"kubernetes.io/projected/821eeff5-b48a-4380-986e-9a9f3bb929eb-kube-api-access-b7lh4\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.864559 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-combined-ca-bundle\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.871732 4792 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-r62mm"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.888991 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.944181 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.965954 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821eeff5-b48a-4380-986e-9a9f3bb929eb-logs\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.965999 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966027 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7p74\" (UniqueName: \"kubernetes.io/projected/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-kube-api-access-x7p74\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966049 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5bqs\" (UniqueName: \"kubernetes.io/projected/bacfd25c-6929-437c-887b-02b5b3f33b1e-kube-api-access-v5bqs\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966072 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-scripts\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966101 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966132 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7lh4\" (UniqueName: \"kubernetes.io/projected/821eeff5-b48a-4380-986e-9a9f3bb929eb-kube-api-access-b7lh4\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966162 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-combined-ca-bundle\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 
17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966223 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-combined-ca-bundle\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966253 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-config-data\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966289 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-config\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966324 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.966350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-config\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.973422 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-combined-ca-bundle\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.976680 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-combined-ca-bundle\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.977663 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-scripts\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.979352 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-config\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.979799 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821eeff5-b48a-4380-986e-9a9f3bb929eb-logs\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.980555 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-config-data\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.985814 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.988968 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.992045 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.992269 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:31:03 crc kubenswrapper[4792]: I1127 17:31:03.999362 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5bqs\" (UniqueName: \"kubernetes.io/projected/bacfd25c-6929-437c-887b-02b5b3f33b1e-kube-api-access-v5bqs\") pod \"neutron-db-sync-nnctq\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.002944 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7lh4\" (UniqueName: \"kubernetes.io/projected/821eeff5-b48a-4380-986e-9a9f3bb929eb-kube-api-access-b7lh4\") pod \"placement-db-sync-658m9\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " pod="openstack/placement-db-sync-658m9" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.013444 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.021542 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.056610 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-658m9" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.069063 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.069192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.069221 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-config\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.069272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.069330 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.069363 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7p74\" (UniqueName: \"kubernetes.io/projected/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-kube-api-access-x7p74\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.071949 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.072938 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.073103 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-config\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:04 crc 
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.074711 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.080931 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.101396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7p74\" (UniqueName: \"kubernetes.io/projected/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-kube-api-access-x7p74\") pod \"dnsmasq-dns-57c957c4ff-r62mm\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " pod="openstack/dnsmasq-dns-57c957c4ff-r62mm"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.131455 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.133229 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.140835 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.141387 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.142899 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.143511 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4s87f"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.151895 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.172841 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.172932 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-config-data\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.172958 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-log-httpd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.173003 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.173247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-run-httpd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.173351 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-scripts\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.173558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thcrd\" (UniqueName: \"kubernetes.io/projected/3efaa573-2d1c-4668-a6bc-b50aa892a299-kube-api-access-thcrd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.176190 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.231202 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.242693 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.242849 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.272774 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.289704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-config-data\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291469 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-log-httpd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291571 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291593 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291656 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291712 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-run-httpd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " 
pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291742 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-logs\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291779 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-scripts\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291837 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291877 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.291991 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thcrd\" (UniqueName: \"kubernetes.io/projected/3efaa573-2d1c-4668-a6bc-b50aa892a299-kube-api-access-thcrd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.292088 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.292119 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hhc2\" (UniqueName: \"kubernetes.io/projected/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-kube-api-access-2hhc2\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.292767 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-run-httpd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.293075 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-log-httpd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.304740 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-scripts\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.313611 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.314221 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-config-data\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.355158 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.404829 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-logs\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405213 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405417 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405504 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") 
" pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405582 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405697 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-logs\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405804 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405878 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.405989 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.406073 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.406162 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.406233 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hhc2\" (UniqueName: \"kubernetes.io/projected/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-kube-api-access-2hhc2\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.406307 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0" 
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.406373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgv8r\" (UniqueName: \"kubernetes.io/projected/ed8ec200-a1d8-482e-b2f3-4d091f491625-kube-api-access-hgv8r\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.418344 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.419952 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.424411 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-logs\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.425529 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.441869 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.444029 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-scripts\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.446992 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-config-data\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.458570 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thcrd\" (UniqueName: \"kubernetes.io/projected/3efaa573-2d1c-4668-a6bc-b50aa892a299-kube-api-access-thcrd\") pod \"ceilometer-0\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.481349 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-sj89x"]
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.485612 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hhc2\" (UniqueName: \"kubernetes.io/projected/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-kube-api-access-2hhc2\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.510331 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.510367 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgv8r\" (UniqueName: \"kubernetes.io/projected/ed8ec200-a1d8-482e-b2f3-4d091f491625-kube-api-access-hgv8r\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.510403 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-logs\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.510433 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.510471 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.510507 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.510621 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.517382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-logs\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.521730 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.527739 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.544113 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.544203 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.565349 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.575834 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dlbr2"]
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.581384 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgv8r\" (UniqueName: \"kubernetes.io/projected/ed8ec200-a1d8-482e-b2f3-4d091f491625-kube-api-access-hgv8r\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.582958 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.587573 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " pod="openstack/glance-default-external-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.592563 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.629144 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.705918 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " pod="openstack/glance-default-internal-api-0"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.748047 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rr9ql"]
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.750610 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rr9ql" event={"ID":"1e3f2e74-1077-4a57-9851-1113b4a46729","Type":"ContainerStarted","Data":"dcae7ef47fb8cc118531991899b8c60db8ae7e417ee23a148e0db01885b9e212"}
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.760921 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" event={"ID":"5a9fa018-11cf-4578-9066-e53129cdd90f","Type":"ContainerStarted","Data":"79c816468c76ed804dae0fbf45c85a601724c0785da235a5dd69bac89f7d6ef0"}
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.763432 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlbr2" event={"ID":"9bddb34e-4227-4435-b964-8c820c84ad4c","Type":"ContainerStarted","Data":"f3b2ab91947935d0fef756f31df568279f553e95c476bf0abdbbf4df0908d7c8"}
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.764177 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-229wk"
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.765906 4792 generic.go:334] "Generic (PLEG): container finished" podID="6b34f1af-7df6-4102-924b-db57a7d95418" containerID="9a8dcb55304c3b35ae5206aad8f8562fbda9f021448282bf242536a1c7890ba0" exitCode=0
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.765935 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-229wk" event={"ID":"6b34f1af-7df6-4102-924b-db57a7d95418","Type":"ContainerDied","Data":"9a8dcb55304c3b35ae5206aad8f8562fbda9f021448282bf242536a1c7890ba0"}
Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.765956 4792 scope.go:117] "RemoveContainer" containerID="9a8dcb55304c3b35ae5206aad8f8562fbda9f021448282bf242536a1c7890ba0"
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.836178 4792 scope.go:117] "RemoveContainer" containerID="d934c558023e82468af33c7cccdf9e0a0d0f28f8b9b780e186e0b2e2bed1cceb" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.853555 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lqg2\" (UniqueName: \"kubernetes.io/projected/6b34f1af-7df6-4102-924b-db57a7d95418-kube-api-access-7lqg2\") pod \"6b34f1af-7df6-4102-924b-db57a7d95418\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.853679 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-swift-storage-0\") pod \"6b34f1af-7df6-4102-924b-db57a7d95418\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.853781 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-sb\") pod \"6b34f1af-7df6-4102-924b-db57a7d95418\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.853904 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-config\") pod \"6b34f1af-7df6-4102-924b-db57a7d95418\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.853930 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-nb\") pod \"6b34f1af-7df6-4102-924b-db57a7d95418\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.853993 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-svc\") pod \"6b34f1af-7df6-4102-924b-db57a7d95418\" (UID: \"6b34f1af-7df6-4102-924b-db57a7d95418\") " Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.862530 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.879131 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b34f1af-7df6-4102-924b-db57a7d95418-kube-api-access-7lqg2" (OuterVolumeSpecName: "kube-api-access-7lqg2") pod "6b34f1af-7df6-4102-924b-db57a7d95418" (UID: "6b34f1af-7df6-4102-924b-db57a7d95418"). InnerVolumeSpecName "kube-api-access-7lqg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.956275 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lqg2\" (UniqueName: \"kubernetes.io/projected/6b34f1af-7df6-4102-924b-db57a7d95418-kube-api-access-7lqg2\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.960448 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vcdv9"] Nov 27 17:31:04 crc kubenswrapper[4792]: I1127 17:31:04.995978 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4xhxm"] Nov 27 17:31:05 crc kubenswrapper[4792]: W1127 17:31:05.015714 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb22e78cf_a700_4a68_8b79_fdb0dc988a04.slice/crio-2aecc80265509e9d667bfc705612c6a9134c5a7b7eb881d1f28d480ba6661703 WatchSource:0}: Error finding container 2aecc80265509e9d667bfc705612c6a9134c5a7b7eb881d1f28d480ba6661703: Status 404 returned error can't find the container with id 2aecc80265509e9d667bfc705612c6a9134c5a7b7eb881d1f28d480ba6661703 Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.084207 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nnctq"] Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.120847 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-658m9"] Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.128438 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b34f1af-7df6-4102-924b-db57a7d95418" (UID: "6b34f1af-7df6-4102-924b-db57a7d95418"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.171681 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.222870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6b34f1af-7df6-4102-924b-db57a7d95418" (UID: "6b34f1af-7df6-4102-924b-db57a7d95418"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.279843 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-r62mm"] Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.291880 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.306807 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6b34f1af-7df6-4102-924b-db57a7d95418" (UID: "6b34f1af-7df6-4102-924b-db57a7d95418"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.343569 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b34f1af-7df6-4102-924b-db57a7d95418" (UID: "6b34f1af-7df6-4102-924b-db57a7d95418"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.380008 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-config" (OuterVolumeSpecName: "config") pod "6b34f1af-7df6-4102-924b-db57a7d95418" (UID: "6b34f1af-7df6-4102-924b-db57a7d95418"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.399175 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.399201 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.399234 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b34f1af-7df6-4102-924b-db57a7d95418-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.443423 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.510241 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:31:05 crc kubenswrapper[4792]: W1127 17:31:05.514524 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3efaa573_2d1c_4668_a6bc_b50aa892a299.slice/crio-255890a81f6063f9140a329353dcc03e68889524b7d104e7c2f4c5862c987da9 WatchSource:0}: Error finding container 255890a81f6063f9140a329353dcc03e68889524b7d104e7c2f4c5862c987da9: Status 404 returned error can't find the container with id 255890a81f6063f9140a329353dcc03e68889524b7d104e7c2f4c5862c987da9 Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.677304 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.773533 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.822044 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3efaa573-2d1c-4668-a6bc-b50aa892a299","Type":"ContainerStarted","Data":"255890a81f6063f9140a329353dcc03e68889524b7d104e7c2f4c5862c987da9"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.834228 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nnctq" event={"ID":"bacfd25c-6929-437c-887b-02b5b3f33b1e","Type":"ContainerStarted","Data":"0d48279ff20a19fece2a5e5cb24e79aac9eb95651a132b35f114e7f45d345a8a"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.838501 4792 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlbr2" event={"ID":"9bddb34e-4227-4435-b964-8c820c84ad4c","Type":"ContainerStarted","Data":"081346cc5ad6b978d5675d4860468b68fc702db31b8fb05002ba8d49b31e1629"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.849746 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vcdv9" event={"ID":"2f1c2409-1610-4ede-ab33-880b170c802f","Type":"ContainerStarted","Data":"8f69fbb18753af4ade9673e29af17319d8e17d1c7de2bf7429b5b6599ed68001"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.878392 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dlbr2" podStartSLOduration=3.878374192 podStartE2EDuration="3.878374192s" podCreationTimestamp="2025-11-27 17:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:05.860944268 +0000 UTC m=+1288.203770586" watchObservedRunningTime="2025-11-27 17:31:05.878374192 +0000 UTC m=+1288.221200510" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.910027 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" event={"ID":"c8048274-9eb3-4d70-aa9d-63bf6f4d210e","Type":"ContainerStarted","Data":"eda53e555215000e9cbec577f32382fecac20e27a2e46291f4e99d79e776a3be"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.924934 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c9ffb1d-c10f-4a24-b658-59b9b13228ad","Type":"ContainerStarted","Data":"e599c4d1a49e7a04f2aa94549d142369f7605010ca32987853b3706791ffd400"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.938374 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.960115 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-895cf5cf-229wk" event={"ID":"6b34f1af-7df6-4102-924b-db57a7d95418","Type":"ContainerDied","Data":"4aec4257d1b75c5d322471d45f3dce1a20a8d11d447e5802e7e44b9a77ff584c"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.960565 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-895cf5cf-229wk" Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.964245 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.967149 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-658m9" event={"ID":"821eeff5-b48a-4380-986e-9a9f3bb929eb","Type":"ContainerStarted","Data":"67b4c44b6cdca1a868fa36d5c7acf4b2af7e3e55ab33e7ba57f8e5cd834abbc1"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.972961 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4xhxm" event={"ID":"b22e78cf-a700-4a68-8b79-fdb0dc988a04","Type":"ContainerStarted","Data":"2aecc80265509e9d667bfc705612c6a9134c5a7b7eb881d1f28d480ba6661703"} Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.976223 4792 generic.go:334] "Generic (PLEG): container finished" podID="5a9fa018-11cf-4578-9066-e53129cdd90f" containerID="9f6acb6fe467b37aecda481c9bb4c0f548b7c095c2556acd75c34371da846622" exitCode=0 Nov 27 17:31:05 crc kubenswrapper[4792]: I1127 17:31:05.976256 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" event={"ID":"5a9fa018-11cf-4578-9066-e53129cdd90f","Type":"ContainerDied","Data":"9f6acb6fe467b37aecda481c9bb4c0f548b7c095c2556acd75c34371da846622"} Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.052331 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-229wk"] Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.063690 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-895cf5cf-229wk"] Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.586861 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.658745 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-svc\") pod \"5a9fa018-11cf-4578-9066-e53129cdd90f\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.658797 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm787\" (UniqueName: \"kubernetes.io/projected/5a9fa018-11cf-4578-9066-e53129cdd90f-kube-api-access-bm787\") pod \"5a9fa018-11cf-4578-9066-e53129cdd90f\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.658870 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-nb\") pod \"5a9fa018-11cf-4578-9066-e53129cdd90f\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.658942 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-sb\") pod \"5a9fa018-11cf-4578-9066-e53129cdd90f\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.659057 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-swift-storage-0\") pod \"5a9fa018-11cf-4578-9066-e53129cdd90f\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.659176 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-config\") pod \"5a9fa018-11cf-4578-9066-e53129cdd90f\" (UID: \"5a9fa018-11cf-4578-9066-e53129cdd90f\") " Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.668874 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9fa018-11cf-4578-9066-e53129cdd90f-kube-api-access-bm787" (OuterVolumeSpecName: "kube-api-access-bm787") pod "5a9fa018-11cf-4578-9066-e53129cdd90f" (UID: "5a9fa018-11cf-4578-9066-e53129cdd90f"). InnerVolumeSpecName "kube-api-access-bm787". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.688133 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a9fa018-11cf-4578-9066-e53129cdd90f" (UID: "5a9fa018-11cf-4578-9066-e53129cdd90f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.703153 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a9fa018-11cf-4578-9066-e53129cdd90f" (UID: "5a9fa018-11cf-4578-9066-e53129cdd90f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.730995 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-config" (OuterVolumeSpecName: "config") pod "5a9fa018-11cf-4578-9066-e53129cdd90f" (UID: "5a9fa018-11cf-4578-9066-e53129cdd90f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.742098 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b34f1af-7df6-4102-924b-db57a7d95418" path="/var/lib/kubelet/pods/6b34f1af-7df6-4102-924b-db57a7d95418/volumes" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.755106 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a9fa018-11cf-4578-9066-e53129cdd90f" (UID: "5a9fa018-11cf-4578-9066-e53129cdd90f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.763291 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.763318 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.763328 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm787\" (UniqueName: \"kubernetes.io/projected/5a9fa018-11cf-4578-9066-e53129cdd90f-kube-api-access-bm787\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.763339 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.763348 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.772071 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5a9fa018-11cf-4578-9066-e53129cdd90f" (UID: "5a9fa018-11cf-4578-9066-e53129cdd90f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:06 crc kubenswrapper[4792]: I1127 17:31:06.867839 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a9fa018-11cf-4578-9066-e53129cdd90f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.036932 4792 generic.go:334] "Generic (PLEG): container finished" podID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerID="e3f0956425a6e41bab78df2453e071191b2a202e75285fe81dc8c9813d41a2c7" exitCode=0 Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.037631 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" event={"ID":"c8048274-9eb3-4d70-aa9d-63bf6f4d210e","Type":"ContainerDied","Data":"e3f0956425a6e41bab78df2453e071191b2a202e75285fe81dc8c9813d41a2c7"} Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.037693 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" event={"ID":"c8048274-9eb3-4d70-aa9d-63bf6f4d210e","Type":"ContainerStarted","Data":"73ce52adb8fdf3cb7d6f0c2b0f45dd76a92dd408329e1af66afa39ea26640272"} Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.040341 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.047580 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed8ec200-a1d8-482e-b2f3-4d091f491625","Type":"ContainerStarted","Data":"3fe04918c9adf3db965967b62d8a4fc3b1032b27d61a03f31fe905204aa2413c"} Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.055723 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nnctq" event={"ID":"bacfd25c-6929-437c-887b-02b5b3f33b1e","Type":"ContainerStarted","Data":"c722c74221b9264a985633f99d530fadbf5f4267f6a2a18c515ed968d61c7a09"} Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.071293 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.071458 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-sj89x" event={"ID":"5a9fa018-11cf-4578-9066-e53129cdd90f","Type":"ContainerDied","Data":"79c816468c76ed804dae0fbf45c85a601724c0785da235a5dd69bac89f7d6ef0"} Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.072216 4792 scope.go:117] "RemoveContainer" containerID="9f6acb6fe467b37aecda481c9bb4c0f548b7c095c2556acd75c34371da846622" Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.083670 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" podStartSLOduration=4.083623875 podStartE2EDuration="4.083623875s" podCreationTimestamp="2025-11-27 17:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:07.069047362 +0000 UTC m=+1289.411873680" watchObservedRunningTime="2025-11-27 17:31:07.083623875 +0000 UTC m=+1289.426450193" Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.102036 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-nnctq" podStartSLOduration=4.102012583 podStartE2EDuration="4.102012583s" podCreationTimestamp="2025-11-27 17:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:07.08662833 +0000 UTC m=+1289.429454648" watchObservedRunningTime="2025-11-27 17:31:07.102012583 +0000 UTC m=+1289.444838901" Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.163808 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-sj89x"] Nov 27 17:31:07 crc kubenswrapper[4792]: I1127 17:31:07.189568 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-sj89x"] Nov 27 17:31:08 crc kubenswrapper[4792]: I1127 17:31:08.111064 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c9ffb1d-c10f-4a24-b658-59b9b13228ad","Type":"ContainerStarted","Data":"c6c9b445efdddb1688a71168a2c1e980c0bb93b889c089d08e8ffb5bcb9f588d"} Nov 27 17:31:08 crc kubenswrapper[4792]: I1127 17:31:08.119152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed8ec200-a1d8-482e-b2f3-4d091f491625","Type":"ContainerStarted","Data":"3c9210275cdd8812ab8fb7e69541b7260a0f570f18cee97f39179fe93b453a65"} Nov 27 17:31:08 crc kubenswrapper[4792]: I1127 17:31:08.290171 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:31:08 crc kubenswrapper[4792]: I1127 17:31:08.290580 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:31:08 crc kubenswrapper[4792]: I1127 17:31:08.715889 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5a9fa018-11cf-4578-9066-e53129cdd90f" path="/var/lib/kubelet/pods/5a9fa018-11cf-4578-9066-e53129cdd90f/volumes" Nov 27 17:31:11 crc kubenswrapper[4792]: I1127 17:31:11.182343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c9ffb1d-c10f-4a24-b658-59b9b13228ad","Type":"ContainerStarted","Data":"2917a4a144949afed165f4243ed1f61ed3e06cb860be08318ff030486fbe9dbf"} Nov 27 17:31:11 crc kubenswrapper[4792]: I1127 17:31:11.182470 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerName="glance-log" containerID="cri-o://c6c9b445efdddb1688a71168a2c1e980c0bb93b889c089d08e8ffb5bcb9f588d" gracePeriod=30 Nov 27 17:31:11 crc kubenswrapper[4792]: I1127 17:31:11.182605 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerName="glance-httpd" containerID="cri-o://2917a4a144949afed165f4243ed1f61ed3e06cb860be08318ff030486fbe9dbf" gracePeriod=30 Nov 27 17:31:11 crc kubenswrapper[4792]: I1127 17:31:11.193131 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed8ec200-a1d8-482e-b2f3-4d091f491625","Type":"ContainerStarted","Data":"87471452d974abae29e38c57619344a0cf35cecb691ad70c8b7ecc83b5b951e0"} Nov 27 17:31:11 crc kubenswrapper[4792]: I1127 17:31:11.193394 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerName="glance-log" containerID="cri-o://3c9210275cdd8812ab8fb7e69541b7260a0f570f18cee97f39179fe93b453a65" gracePeriod=30 Nov 27 17:31:11 crc kubenswrapper[4792]: I1127 17:31:11.193834 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerName="glance-httpd" containerID="cri-o://87471452d974abae29e38c57619344a0cf35cecb691ad70c8b7ecc83b5b951e0" gracePeriod=30 Nov 27 17:31:11 crc kubenswrapper[4792]: I1127 17:31:11.225336 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.225312477 podStartE2EDuration="8.225312477s" podCreationTimestamp="2025-11-27 17:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:11.208867377 +0000 UTC m=+1293.551693695" watchObservedRunningTime="2025-11-27 17:31:11.225312477 +0000 UTC m=+1293.568138795" Nov 27 17:31:11 crc kubenswrapper[4792]: I1127 17:31:11.249049 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.249020838 podStartE2EDuration="8.249020838s" podCreationTimestamp="2025-11-27 17:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:11.234552617 +0000 UTC m=+1293.577378935" watchObservedRunningTime="2025-11-27 17:31:11.249020838 +0000 UTC m=+1293.591847156" Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.204141 4792 generic.go:334] "Generic (PLEG): container finished" podID="ed8ec200-a1d8-482e-b2f3-4d091f491625" 
containerID="87471452d974abae29e38c57619344a0cf35cecb691ad70c8b7ecc83b5b951e0" exitCode=143 Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.204530 4792 generic.go:334] "Generic (PLEG): container finished" podID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerID="3c9210275cdd8812ab8fb7e69541b7260a0f570f18cee97f39179fe93b453a65" exitCode=143 Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.204229 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed8ec200-a1d8-482e-b2f3-4d091f491625","Type":"ContainerDied","Data":"87471452d974abae29e38c57619344a0cf35cecb691ad70c8b7ecc83b5b951e0"} Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.204614 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed8ec200-a1d8-482e-b2f3-4d091f491625","Type":"ContainerDied","Data":"3c9210275cdd8812ab8fb7e69541b7260a0f570f18cee97f39179fe93b453a65"} Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.206589 4792 generic.go:334] "Generic (PLEG): container finished" podID="9bddb34e-4227-4435-b964-8c820c84ad4c" containerID="081346cc5ad6b978d5675d4860468b68fc702db31b8fb05002ba8d49b31e1629" exitCode=0 Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.206631 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlbr2" event={"ID":"9bddb34e-4227-4435-b964-8c820c84ad4c","Type":"ContainerDied","Data":"081346cc5ad6b978d5675d4860468b68fc702db31b8fb05002ba8d49b31e1629"} Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.210240 4792 generic.go:334] "Generic (PLEG): container finished" podID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerID="2917a4a144949afed165f4243ed1f61ed3e06cb860be08318ff030486fbe9dbf" exitCode=143 Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.210261 4792 generic.go:334] "Generic (PLEG): container finished" podID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerID="c6c9b445efdddb1688a71168a2c1e980c0bb93b889c089d08e8ffb5bcb9f588d" exitCode=143 Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.210280 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c9ffb1d-c10f-4a24-b658-59b9b13228ad","Type":"ContainerDied","Data":"2917a4a144949afed165f4243ed1f61ed3e06cb860be08318ff030486fbe9dbf"} Nov 27 17:31:12 crc kubenswrapper[4792]: I1127 17:31:12.210308 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c9ffb1d-c10f-4a24-b658-59b9b13228ad","Type":"ContainerDied","Data":"c6c9b445efdddb1688a71168a2c1e980c0bb93b889c089d08e8ffb5bcb9f588d"} Nov 27 17:31:14 crc kubenswrapper[4792]: I1127 17:31:14.178024 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:14 crc kubenswrapper[4792]: I1127 17:31:14.261741 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-n2w4b"] Nov 27 17:31:14 crc kubenswrapper[4792]: I1127 17:31:14.262008 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="dnsmasq-dns" containerID="cri-o://1215a5008e45dadb5a11e1ff0ebfd6e23e27ad1a88407570262534ffe95371fd" gracePeriod=10 Nov 27 17:31:15 crc kubenswrapper[4792]: I1127 17:31:15.245117 4792 generic.go:334] "Generic (PLEG): container finished" podID="7c639620-d012-4fb1-851f-2316fb8c51bc" 
containerID="1215a5008e45dadb5a11e1ff0ebfd6e23e27ad1a88407570262534ffe95371fd" exitCode=0 Nov 27 17:31:15 crc kubenswrapper[4792]: I1127 17:31:15.246362 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" event={"ID":"7c639620-d012-4fb1-851f-2316fb8c51bc","Type":"ContainerDied","Data":"1215a5008e45dadb5a11e1ff0ebfd6e23e27ad1a88407570262534ffe95371fd"} Nov 27 17:31:15 crc kubenswrapper[4792]: I1127 17:31:15.684836 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused" Nov 27 17:31:20 crc kubenswrapper[4792]: I1127 17:31:20.683740 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.597158 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.739938 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-credential-keys\") pod \"9bddb34e-4227-4435-b964-8c820c84ad4c\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.740017 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-fernet-keys\") pod \"9bddb34e-4227-4435-b964-8c820c84ad4c\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.740173 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-scripts\") pod \"9bddb34e-4227-4435-b964-8c820c84ad4c\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.740213 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-combined-ca-bundle\") pod \"9bddb34e-4227-4435-b964-8c820c84ad4c\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.740312 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pr5h\" (UniqueName: \"kubernetes.io/projected/9bddb34e-4227-4435-b964-8c820c84ad4c-kube-api-access-4pr5h\") pod \"9bddb34e-4227-4435-b964-8c820c84ad4c\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.740341 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-config-data\") pod \"9bddb34e-4227-4435-b964-8c820c84ad4c\" (UID: \"9bddb34e-4227-4435-b964-8c820c84ad4c\") " Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.759005 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9bddb34e-4227-4435-b964-8c820c84ad4c" (UID: "9bddb34e-4227-4435-b964-8c820c84ad4c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.759037 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9bddb34e-4227-4435-b964-8c820c84ad4c" (UID: "9bddb34e-4227-4435-b964-8c820c84ad4c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.759050 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bddb34e-4227-4435-b964-8c820c84ad4c-kube-api-access-4pr5h" (OuterVolumeSpecName: "kube-api-access-4pr5h") pod "9bddb34e-4227-4435-b964-8c820c84ad4c" (UID: "9bddb34e-4227-4435-b964-8c820c84ad4c"). InnerVolumeSpecName "kube-api-access-4pr5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.759131 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-scripts" (OuterVolumeSpecName: "scripts") pod "9bddb34e-4227-4435-b964-8c820c84ad4c" (UID: "9bddb34e-4227-4435-b964-8c820c84ad4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.808049 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bddb34e-4227-4435-b964-8c820c84ad4c" (UID: "9bddb34e-4227-4435-b964-8c820c84ad4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.808276 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-config-data" (OuterVolumeSpecName: "config-data") pod "9bddb34e-4227-4435-b964-8c820c84ad4c" (UID: "9bddb34e-4227-4435-b964-8c820c84ad4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.843282 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.843316 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.843327 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pr5h\" (UniqueName: \"kubernetes.io/projected/9bddb34e-4227-4435-b964-8c820c84ad4c-kube-api-access-4pr5h\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.843336 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.843344 4792 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:23 crc kubenswrapper[4792]: I1127 17:31:23.843352 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9bddb34e-4227-4435-b964-8c820c84ad4c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.353641 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlbr2" event={"ID":"9bddb34e-4227-4435-b964-8c820c84ad4c","Type":"ContainerDied","Data":"f3b2ab91947935d0fef756f31df568279f553e95c476bf0abdbbf4df0908d7c8"} Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.353714 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dlbr2" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.353723 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b2ab91947935d0fef756f31df568279f553e95c476bf0abdbbf4df0908d7c8" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.725492 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dlbr2"] Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.725531 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dlbr2"] Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.800904 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xbj9c"] Nov 27 17:31:24 crc kubenswrapper[4792]: E1127 17:31:24.801328 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9fa018-11cf-4578-9066-e53129cdd90f" containerName="init" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.801346 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9fa018-11cf-4578-9066-e53129cdd90f" containerName="init" Nov 27 17:31:24 crc kubenswrapper[4792]: E1127 17:31:24.801368 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bddb34e-4227-4435-b964-8c820c84ad4c" containerName="keystone-bootstrap" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.801376 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bddb34e-4227-4435-b964-8c820c84ad4c" containerName="keystone-bootstrap" Nov 27 17:31:24 crc kubenswrapper[4792]: E1127 17:31:24.801392 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b34f1af-7df6-4102-924b-db57a7d95418" containerName="init" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.801398 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b34f1af-7df6-4102-924b-db57a7d95418" containerName="init" Nov 27 17:31:24 crc kubenswrapper[4792]: E1127 17:31:24.801424 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b34f1af-7df6-4102-924b-db57a7d95418" containerName="dnsmasq-dns" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.801430 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b34f1af-7df6-4102-924b-db57a7d95418" containerName="dnsmasq-dns" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.801617 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bddb34e-4227-4435-b964-8c820c84ad4c" containerName="keystone-bootstrap" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.801628 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b34f1af-7df6-4102-924b-db57a7d95418" containerName="dnsmasq-dns" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.801640 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a9fa018-11cf-4578-9066-e53129cdd90f" containerName="init" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.802361 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.805029 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.805260 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.806035 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p6xs8" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.806102 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.806212 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.811825 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xbj9c"] Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.966239 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-fernet-keys\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.966507 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-config-data\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.966551 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5zpk\" (UniqueName: \"kubernetes.io/projected/477a2f0e-b663-45b8-8541-f4d93f420304-kube-api-access-l5zpk\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.966615 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-scripts\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.966690 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-credential-keys\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:24 crc kubenswrapper[4792]: I1127 17:31:24.966763 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-combined-ca-bundle\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.069049 4792 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-fernet-keys\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.069104 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-config-data\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.069119 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5zpk\" (UniqueName: \"kubernetes.io/projected/477a2f0e-b663-45b8-8541-f4d93f420304-kube-api-access-l5zpk\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.069157 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-scripts\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.069199 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-credential-keys\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.069248 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-combined-ca-bundle\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.076188 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-scripts\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.077280 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-config-data\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.079694 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-credential-keys\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.080293 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-fernet-keys\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " 
pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.081366 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-combined-ca-bundle\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.092662 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5zpk\" (UniqueName: \"kubernetes.io/projected/477a2f0e-b663-45b8-8541-f4d93f420304-kube-api-access-l5zpk\") pod \"keystone-bootstrap-xbj9c\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:25 crc kubenswrapper[4792]: I1127 17:31:25.159597 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:26 crc kubenswrapper[4792]: I1127 17:31:26.699213 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bddb34e-4227-4435-b964-8c820c84ad4c" path="/var/lib/kubelet/pods/9bddb34e-4227-4435-b964-8c820c84ad4c/volumes" Nov 27 17:31:29 crc kubenswrapper[4792]: I1127 17:31:29.409408 4792 generic.go:334] "Generic (PLEG): container finished" podID="bacfd25c-6929-437c-887b-02b5b3f33b1e" containerID="c722c74221b9264a985633f99d530fadbf5f4267f6a2a18c515ed968d61c7a09" exitCode=0 Nov 27 17:31:29 crc kubenswrapper[4792]: I1127 17:31:29.409803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nnctq" event={"ID":"bacfd25c-6929-437c-887b-02b5b3f33b1e","Type":"ContainerDied","Data":"c722c74221b9264a985633f99d530fadbf5f4267f6a2a18c515ed968d61c7a09"} Nov 27 17:31:30 crc kubenswrapper[4792]: I1127 17:31:30.683379 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Nov 27 17:31:30 crc kubenswrapper[4792]: I1127 17:31:30.683846 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:31:30 crc kubenswrapper[4792]: I1127 17:31:30.961793 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.116829 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-config-data\") pod \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.116997 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-httpd-run\") pod \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117087 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-scripts\") pod \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117131 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117230 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-logs\") pod \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117274 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-public-tls-certs\") pod \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117309 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-combined-ca-bundle\") pod \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117384 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hhc2\" (UniqueName: \"kubernetes.io/projected/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-kube-api-access-2hhc2\") pod \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\" (UID: \"1c9ffb1d-c10f-4a24-b658-59b9b13228ad\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117557 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-logs" (OuterVolumeSpecName: "logs") pod "1c9ffb1d-c10f-4a24-b658-59b9b13228ad" (UID: "1c9ffb1d-c10f-4a24-b658-59b9b13228ad"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117918 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.117943 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c9ffb1d-c10f-4a24-b658-59b9b13228ad" (UID: "1c9ffb1d-c10f-4a24-b658-59b9b13228ad"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.123097 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-scripts" (OuterVolumeSpecName: "scripts") pod "1c9ffb1d-c10f-4a24-b658-59b9b13228ad" (UID: "1c9ffb1d-c10f-4a24-b658-59b9b13228ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.124149 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "1c9ffb1d-c10f-4a24-b658-59b9b13228ad" (UID: "1c9ffb1d-c10f-4a24-b658-59b9b13228ad"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.142746 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-kube-api-access-2hhc2" (OuterVolumeSpecName: "kube-api-access-2hhc2") pod "1c9ffb1d-c10f-4a24-b658-59b9b13228ad" (UID: "1c9ffb1d-c10f-4a24-b658-59b9b13228ad"). InnerVolumeSpecName "kube-api-access-2hhc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.190558 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c9ffb1d-c10f-4a24-b658-59b9b13228ad" (UID: "1c9ffb1d-c10f-4a24-b658-59b9b13228ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.214881 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-config-data" (OuterVolumeSpecName: "config-data") pod "1c9ffb1d-c10f-4a24-b658-59b9b13228ad" (UID: "1c9ffb1d-c10f-4a24-b658-59b9b13228ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.220356 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.220389 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.220398 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.220427 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.220437 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.220447 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hhc2\" (UniqueName: \"kubernetes.io/projected/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-kube-api-access-2hhc2\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.227938 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1c9ffb1d-c10f-4a24-b658-59b9b13228ad" (UID: "1c9ffb1d-c10f-4a24-b658-59b9b13228ad"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.250846 4792 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.324429 4792 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.324472 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c9ffb1d-c10f-4a24-b658-59b9b13228ad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.387794 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.387993 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9ch5f4h595hfbh66fh5ffh674h666h7dh67fh8ch688h58h655h569h74h595h57ch674hf6h676h56ch58ch97h58bh59h568h5cch59bh95h675h544q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thcrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod ceilometer-0_openstack(3efaa573-2d1c-4668-a6bc-b50aa892a299): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.394480 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.443671 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1c9ffb1d-c10f-4a24-b658-59b9b13228ad","Type":"ContainerDied","Data":"e599c4d1a49e7a04f2aa94549d142369f7605010ca32987853b3706791ffd400"} Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.443734 4792 scope.go:117] "RemoveContainer" containerID="2917a4a144949afed165f4243ed1f61ed3e06cb860be08318ff030486fbe9dbf" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.443902 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.448325 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed8ec200-a1d8-482e-b2f3-4d091f491625","Type":"ContainerDied","Data":"3fe04918c9adf3db965967b62d8a4fc3b1032b27d61a03f31fe905204aa2413c"} Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.448366 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.486324 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.521325 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.529977 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-scripts\") pod \"ed8ec200-a1d8-482e-b2f3-4d091f491625\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.530052 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-combined-ca-bundle\") pod \"ed8ec200-a1d8-482e-b2f3-4d091f491625\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.530206 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ed8ec200-a1d8-482e-b2f3-4d091f491625\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.530263 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-config-data\") pod \"ed8ec200-a1d8-482e-b2f3-4d091f491625\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.530302 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-logs\") pod \"ed8ec200-a1d8-482e-b2f3-4d091f491625\" (UID: 
\"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.530399 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgv8r\" (UniqueName: \"kubernetes.io/projected/ed8ec200-a1d8-482e-b2f3-4d091f491625-kube-api-access-hgv8r\") pod \"ed8ec200-a1d8-482e-b2f3-4d091f491625\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.530480 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-httpd-run\") pod \"ed8ec200-a1d8-482e-b2f3-4d091f491625\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.530504 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-internal-tls-certs\") pod \"ed8ec200-a1d8-482e-b2f3-4d091f491625\" (UID: \"ed8ec200-a1d8-482e-b2f3-4d091f491625\") " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.533906 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.534407 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerName="glance-httpd" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.534424 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerName="glance-httpd" Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.534433 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerName="glance-log" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.534439 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerName="glance-log" Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.534468 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerName="glance-log" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.534475 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerName="glance-log" Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.534494 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerName="glance-httpd" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.534499 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerName="glance-httpd" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.534701 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerName="glance-httpd" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.534719 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerName="glance-log" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.534740 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" containerName="glance-log" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.534752 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" containerName="glance-httpd" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.536004 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ed8ec200-a1d8-482e-b2f3-4d091f491625" (UID: "ed8ec200-a1d8-482e-b2f3-4d091f491625"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.536273 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-logs" (OuterVolumeSpecName: "logs") pod "ed8ec200-a1d8-482e-b2f3-4d091f491625" (UID: "ed8ec200-a1d8-482e-b2f3-4d091f491625"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.537635 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-scripts" (OuterVolumeSpecName: "scripts") pod "ed8ec200-a1d8-482e-b2f3-4d091f491625" (UID: "ed8ec200-a1d8-482e-b2f3-4d091f491625"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.538974 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8ec200-a1d8-482e-b2f3-4d091f491625-kube-api-access-hgv8r" (OuterVolumeSpecName: "kube-api-access-hgv8r") pod "ed8ec200-a1d8-482e-b2f3-4d091f491625" (UID: "ed8ec200-a1d8-482e-b2f3-4d091f491625"). InnerVolumeSpecName "kube-api-access-hgv8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.539653 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "ed8ec200-a1d8-482e-b2f3-4d091f491625" (UID: "ed8ec200-a1d8-482e-b2f3-4d091f491625"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.540272 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.548968 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.549254 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.580698 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed8ec200-a1d8-482e-b2f3-4d091f491625" (UID: "ed8ec200-a1d8-482e-b2f3-4d091f491625"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.588305 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.624346 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ed8ec200-a1d8-482e-b2f3-4d091f491625" (UID: "ed8ec200-a1d8-482e-b2f3-4d091f491625"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.632936 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633011 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633085 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633115 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633385 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6h4\" (UniqueName: \"kubernetes.io/projected/6ce88c22-48ff-4c20-a73e-27324f35f70d-kube-api-access-jx6h4\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-logs\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633487 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 
17:31:31.633534 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633738 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgv8r\" (UniqueName: \"kubernetes.io/projected/ed8ec200-a1d8-482e-b2f3-4d091f491625-kube-api-access-hgv8r\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633762 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633772 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633781 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633789 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633811 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.633820 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed8ec200-a1d8-482e-b2f3-4d091f491625-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.635180 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-config-data" (OuterVolumeSpecName: "config-data") pod "ed8ec200-a1d8-482e-b2f3-4d091f491625" (UID: "ed8ec200-a1d8-482e-b2f3-4d091f491625"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.658892 4792 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735332 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx6h4\" (UniqueName: \"kubernetes.io/projected/6ce88c22-48ff-4c20-a73e-27324f35f70d-kube-api-access-jx6h4\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-logs\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735410 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735476 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735517 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735556 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735577 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735679 4792 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735692 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8ec200-a1d8-482e-b2f3-4d091f491625-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.735811 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.736796 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-logs\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.736889 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.739815 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-scripts\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.739891 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.743816 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-config-data\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.745185 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.752979 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx6h4\" (UniqueName: \"kubernetes.io/projected/6ce88c22-48ff-4c20-a73e-27324f35f70d-kube-api-access-jx6h4\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.768061 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.789963 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.804797 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.816521 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.818478 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.822028 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.822211 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.848060 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.860203 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.916742 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.916895 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnrvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-rr9ql_openstack(1e3f2e74-1077-4a57-9851-1113b4a46729): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:31:31 crc kubenswrapper[4792]: E1127 17:31:31.918405 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-rr9ql" podUID="1e3f2e74-1077-4a57-9851-1113b4a46729" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.941776 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.942149 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.942184 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.942209 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.942232 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.942273 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.942299 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:31 crc kubenswrapper[4792]: I1127 17:31:31.942375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rxjm\" (UniqueName: \"kubernetes.io/projected/ce42d3e2-e953-4283-81f3-855bfb27fd10-kube-api-access-9rxjm\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.006930 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.022377 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.044859 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rxjm\" (UniqueName: \"kubernetes.io/projected/ce42d3e2-e953-4283-81f3-855bfb27fd10-kube-api-access-9rxjm\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.045036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.045098 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.045131 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.045163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.045190 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.045237 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.045267 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.046512 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.046610 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.047305 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.053905 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.054522 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.054822 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.056893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.072386 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rxjm\" (UniqueName: \"kubernetes.io/projected/ce42d3e2-e953-4283-81f3-855bfb27fd10-kube-api-access-9rxjm\") pod 
\"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.130263 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.146973 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-config\") pod \"7c639620-d012-4fb1-851f-2316fb8c51bc\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.147138 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-svc\") pod \"7c639620-d012-4fb1-851f-2316fb8c51bc\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.147202 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-sb\") pod \"7c639620-d012-4fb1-851f-2316fb8c51bc\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.147262 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5bqs\" (UniqueName: \"kubernetes.io/projected/bacfd25c-6929-437c-887b-02b5b3f33b1e-kube-api-access-v5bqs\") pod \"bacfd25c-6929-437c-887b-02b5b3f33b1e\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.147347 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trtf6\" (UniqueName: \"kubernetes.io/projected/7c639620-d012-4fb1-851f-2316fb8c51bc-kube-api-access-trtf6\") pod \"7c639620-d012-4fb1-851f-2316fb8c51bc\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.147393 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-config\") pod \"bacfd25c-6929-437c-887b-02b5b3f33b1e\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.147450 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-nb\") pod \"7c639620-d012-4fb1-851f-2316fb8c51bc\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.147485 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-swift-storage-0\") pod \"7c639620-d012-4fb1-851f-2316fb8c51bc\" (UID: \"7c639620-d012-4fb1-851f-2316fb8c51bc\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.147538 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-combined-ca-bundle\") pod \"bacfd25c-6929-437c-887b-02b5b3f33b1e\" (UID: \"bacfd25c-6929-437c-887b-02b5b3f33b1e\") " Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.153193 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bacfd25c-6929-437c-887b-02b5b3f33b1e-kube-api-access-v5bqs" (OuterVolumeSpecName: "kube-api-access-v5bqs") pod "bacfd25c-6929-437c-887b-02b5b3f33b1e" (UID: "bacfd25c-6929-437c-887b-02b5b3f33b1e"). InnerVolumeSpecName "kube-api-access-v5bqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.153335 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c639620-d012-4fb1-851f-2316fb8c51bc-kube-api-access-trtf6" (OuterVolumeSpecName: "kube-api-access-trtf6") pod "7c639620-d012-4fb1-851f-2316fb8c51bc" (UID: "7c639620-d012-4fb1-851f-2316fb8c51bc"). InnerVolumeSpecName "kube-api-access-trtf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.182501 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-config" (OuterVolumeSpecName: "config") pod "bacfd25c-6929-437c-887b-02b5b3f33b1e" (UID: "bacfd25c-6929-437c-887b-02b5b3f33b1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.190291 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bacfd25c-6929-437c-887b-02b5b3f33b1e" (UID: "bacfd25c-6929-437c-887b-02b5b3f33b1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.206903 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c639620-d012-4fb1-851f-2316fb8c51bc" (UID: "7c639620-d012-4fb1-851f-2316fb8c51bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.207996 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c639620-d012-4fb1-851f-2316fb8c51bc" (UID: "7c639620-d012-4fb1-851f-2316fb8c51bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.209145 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c639620-d012-4fb1-851f-2316fb8c51bc" (UID: "7c639620-d012-4fb1-851f-2316fb8c51bc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.210146 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-config" (OuterVolumeSpecName: "config") pod "7c639620-d012-4fb1-851f-2316fb8c51bc" (UID: "7c639620-d012-4fb1-851f-2316fb8c51bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.210448 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c639620-d012-4fb1-851f-2316fb8c51bc" (UID: "7c639620-d012-4fb1-851f-2316fb8c51bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250105 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250149 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250162 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250175 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5bqs\" (UniqueName: \"kubernetes.io/projected/bacfd25c-6929-437c-887b-02b5b3f33b1e-kube-api-access-v5bqs\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250190 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trtf6\" (UniqueName: \"kubernetes.io/projected/7c639620-d012-4fb1-851f-2316fb8c51bc-kube-api-access-trtf6\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250200 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250211 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250222 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c639620-d012-4fb1-851f-2316fb8c51bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.250233 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacfd25c-6929-437c-887b-02b5b3f33b1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.299335 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.464341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nnctq" event={"ID":"bacfd25c-6929-437c-887b-02b5b3f33b1e","Type":"ContainerDied","Data":"0d48279ff20a19fece2a5e5cb24e79aac9eb95651a132b35f114e7f45d345a8a"} Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.464385 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d48279ff20a19fece2a5e5cb24e79aac9eb95651a132b35f114e7f45d345a8a" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.464445 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nnctq" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.474253 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.474372 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" event={"ID":"7c639620-d012-4fb1-851f-2316fb8c51bc","Type":"ContainerDied","Data":"64353dea212382a7770c00de5aa3c546b72c59768a36411a9ad099cc25979b4e"} Nov 27 17:31:32 crc kubenswrapper[4792]: E1127 17:31:32.476447 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-rr9ql" podUID="1e3f2e74-1077-4a57-9851-1113b4a46729" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.526326 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-n2w4b"] Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.535956 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-n2w4b"] Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.701352 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9ffb1d-c10f-4a24-b658-59b9b13228ad" path="/var/lib/kubelet/pods/1c9ffb1d-c10f-4a24-b658-59b9b13228ad/volumes" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.702552 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" path="/var/lib/kubelet/pods/7c639620-d012-4fb1-851f-2316fb8c51bc/volumes" Nov 27 17:31:32 crc kubenswrapper[4792]: I1127 17:31:32.703384 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8ec200-a1d8-482e-b2f3-4d091f491625" path="/var/lib/kubelet/pods/ed8ec200-a1d8-482e-b2f3-4d091f491625/volumes" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.267307 4792 scope.go:117] "RemoveContainer" containerID="c6c9b445efdddb1688a71168a2c1e980c0bb93b889c089d08e8ffb5bcb9f588d" Nov 27 17:31:33 crc kubenswrapper[4792]: E1127 17:31:33.310906 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 27 17:31:33 crc kubenswrapper[4792]: E1127 17:31:33.311202 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sg2wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vcdv9_openstack(2f1c2409-1610-4ede-ab33-880b170c802f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:31:33 crc kubenswrapper[4792]: E1127 17:31:33.313078 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vcdv9" podUID="2f1c2409-1610-4ede-ab33-880b170c802f" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.314036 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9rdwk"] Nov 27 17:31:33 crc kubenswrapper[4792]: E1127 17:31:33.316861 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bacfd25c-6929-437c-887b-02b5b3f33b1e" containerName="neutron-db-sync" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.316886 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bacfd25c-6929-437c-887b-02b5b3f33b1e" containerName="neutron-db-sync" Nov 27 17:31:33 crc kubenswrapper[4792]: E1127 17:31:33.316931 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="init" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 
17:31:33.316937 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="init" Nov 27 17:31:33 crc kubenswrapper[4792]: E1127 17:31:33.316954 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="dnsmasq-dns" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.316960 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="dnsmasq-dns" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.317184 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bacfd25c-6929-437c-887b-02b5b3f33b1e" containerName="neutron-db-sync" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.317197 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="dnsmasq-dns" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.318332 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.387196 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.387260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjfd\" (UniqueName: \"kubernetes.io/projected/7b749958-4e65-4302-a52d-0a1ec095ba72-kube-api-access-dfjfd\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.387295 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.387347 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-config\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.387400 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.387423 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 
27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.388414 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9rdwk"] Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.454433 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-774c69c88b-662kt"] Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.456841 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.459953 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.460050 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-st6rt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.460182 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.462723 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.467011 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-774c69c88b-662kt"] Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490060 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st77w\" (UniqueName: \"kubernetes.io/projected/399bbc40-3013-43ce-9de7-72105e209540-kube-api-access-st77w\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490125 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490152 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-config\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490185 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfjfd\" (UniqueName: \"kubernetes.io/projected/7b749958-4e65-4302-a52d-0a1ec095ba72-kube-api-access-dfjfd\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490210 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490234 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-ovndb-tls-certs\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490269 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-combined-ca-bundle\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490304 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-config\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490363 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490384 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.490419 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-httpd-config\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.491819 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-config\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.491864 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.492802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.494225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.494572 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.523991 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjfd\" (UniqueName: \"kubernetes.io/projected/7b749958-4e65-4302-a52d-0a1ec095ba72-kube-api-access-dfjfd\") pod \"dnsmasq-dns-5ccc5c4795-9rdwk\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: E1127 17:31:33.541938 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-vcdv9" podUID="2f1c2409-1610-4ede-ab33-880b170c802f" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.542157 4792 scope.go:117] "RemoveContainer" containerID="87471452d974abae29e38c57619344a0cf35cecb691ad70c8b7ecc83b5b951e0" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.592746 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st77w\" (UniqueName: \"kubernetes.io/projected/399bbc40-3013-43ce-9de7-72105e209540-kube-api-access-st77w\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.592994 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-config\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.593094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-ovndb-tls-certs\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.593113 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-combined-ca-bundle\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.593247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-httpd-config\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.599567 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-combined-ca-bundle\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.602002 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-httpd-config\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.608926 4792 scope.go:117] "RemoveContainer" containerID="3c9210275cdd8812ab8fb7e69541b7260a0f570f18cee97f39179fe93b453a65" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.613873 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-ovndb-tls-certs\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.614144 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-config\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.626179 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st77w\" (UniqueName: \"kubernetes.io/projected/399bbc40-3013-43ce-9de7-72105e209540-kube-api-access-st77w\") pod \"neutron-774c69c88b-662kt\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.649600 4792 scope.go:117] "RemoveContainer" containerID="1215a5008e45dadb5a11e1ff0ebfd6e23e27ad1a88407570262534ffe95371fd" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.727798 4792 scope.go:117] "RemoveContainer" containerID="42d79b9f9cfa1094aa884431cb1249fd69009b9c5a2a21d5fada65edb159d8de" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.800498 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:33 crc kubenswrapper[4792]: I1127 17:31:33.821206 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.029163 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xbj9c"] Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.133516 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.363525 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9rdwk"] Nov 27 17:31:34 crc kubenswrapper[4792]: W1127 17:31:34.380083 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b749958_4e65_4302_a52d_0a1ec095ba72.slice/crio-01a6c3d662a43ba8d3c8b42abcb3e81f9e1d199c6412a15f3cebdab97b055df3 WatchSource:0}: Error finding container 01a6c3d662a43ba8d3c8b42abcb3e81f9e1d199c6412a15f3cebdab97b055df3: Status 404 returned error can't find the container with id 01a6c3d662a43ba8d3c8b42abcb3e81f9e1d199c6412a15f3cebdab97b055df3 Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.542381 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4xhxm" event={"ID":"b22e78cf-a700-4a68-8b79-fdb0dc988a04","Type":"ContainerStarted","Data":"ffc4651c0ee50b61bc6b7e81cb27476ff6cbccd8a03264d28e8e4b6f1db7a23a"} Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.544303 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce42d3e2-e953-4283-81f3-855bfb27fd10","Type":"ContainerStarted","Data":"58b16adbc8ba86185fc5b702b8d020e3a0245c75ad4a86a848104c801958dddb"} Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.550517 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" event={"ID":"7b749958-4e65-4302-a52d-0a1ec095ba72","Type":"ContainerStarted","Data":"01a6c3d662a43ba8d3c8b42abcb3e81f9e1d199c6412a15f3cebdab97b055df3"} Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.554500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xbj9c" event={"ID":"477a2f0e-b663-45b8-8541-f4d93f420304","Type":"ContainerStarted","Data":"037fb098f31f6a09beaa718f109979106939898206e2ce54821f4e321ed1ba01"} Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.558156 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4xhxm" podStartSLOduration=3.383110424 podStartE2EDuration="31.558143634s" podCreationTimestamp="2025-11-27 17:31:03 +0000 UTC" firstStartedPulling="2025-11-27 17:31:05.030788898 +0000 UTC m=+1287.373615216" lastFinishedPulling="2025-11-27 17:31:33.205822108 +0000 UTC m=+1315.548648426" observedRunningTime="2025-11-27 17:31:34.556267537 +0000 UTC m=+1316.899093855" watchObservedRunningTime="2025-11-27 17:31:34.558143634 +0000 UTC m=+1316.900969942" Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.573890 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-658m9" event={"ID":"821eeff5-b48a-4380-986e-9a9f3bb929eb","Type":"ContainerStarted","Data":"eb1bed6ce26871f6ff33f38050f33b8dc0a72ec45befbead0765b55690bb999e"} Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.606259 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-658m9" podStartSLOduration=3.556109824 podStartE2EDuration="31.606244133s" podCreationTimestamp="2025-11-27 
17:31:03 +0000 UTC" firstStartedPulling="2025-11-27 17:31:05.173093183 +0000 UTC m=+1287.515919501" lastFinishedPulling="2025-11-27 17:31:33.223227492 +0000 UTC m=+1315.566053810" observedRunningTime="2025-11-27 17:31:34.59730652 +0000 UTC m=+1316.940132838" watchObservedRunningTime="2025-11-27 17:31:34.606244133 +0000 UTC m=+1316.949070441" Nov 27 17:31:34 crc kubenswrapper[4792]: I1127 17:31:34.707487 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-774c69c88b-662kt"] Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.129418 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.671292 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce42d3e2-e953-4283-81f3-855bfb27fd10","Type":"ContainerStarted","Data":"8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e"} Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.675013 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ce88c22-48ff-4c20-a73e-27324f35f70d","Type":"ContainerStarted","Data":"024bf9be0562df9c440626675994c586ffd2a8bd27d830af3246fa2bf535c66f"} Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.677544 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-774c69c88b-662kt" event={"ID":"399bbc40-3013-43ce-9de7-72105e209540","Type":"ContainerStarted","Data":"2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c"} Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.677572 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-774c69c88b-662kt" event={"ID":"399bbc40-3013-43ce-9de7-72105e209540","Type":"ContainerStarted","Data":"0dd67ad510e113695369a3a582936b17343f84fa72dbb787bc23f4f90dddff4b"} Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.679151 4792 generic.go:334] "Generic (PLEG): container finished" podID="7b749958-4e65-4302-a52d-0a1ec095ba72" containerID="0255dea69c99c7bb02be61da15ea9700c47a9a169c45ef1d66f0bae5e682176c" exitCode=0 Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.679893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" event={"ID":"7b749958-4e65-4302-a52d-0a1ec095ba72","Type":"ContainerDied","Data":"0255dea69c99c7bb02be61da15ea9700c47a9a169c45ef1d66f0bae5e682176c"} Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.686118 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-n2w4b" podUID="7c639620-d012-4fb1-851f-2316fb8c51bc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.687031 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xbj9c" event={"ID":"477a2f0e-b663-45b8-8541-f4d93f420304","Type":"ContainerStarted","Data":"bdb7e5400f0d3628a2149c8d4d991b96cd85b6980fc6794084a1d6301e9320be"} Nov 27 17:31:35 crc kubenswrapper[4792]: I1127 17:31:35.729042 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xbj9c" podStartSLOduration=11.729024057 podStartE2EDuration="11.729024057s" podCreationTimestamp="2025-11-27 17:31:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 
17:31:35.716002723 +0000 UTC m=+1318.058829041" watchObservedRunningTime="2025-11-27 17:31:35.729024057 +0000 UTC m=+1318.071850375" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.386386 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fff8f565-9t8rn"] Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.388856 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.395255 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.395433 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.426408 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fff8f565-9t8rn"] Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.484588 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-ovndb-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.484663 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-combined-ca-bundle\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.484705 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-public-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.484889 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-internal-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.484917 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-config\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.484940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjhbb\" (UniqueName: \"kubernetes.io/projected/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-kube-api-access-fjhbb\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.485026 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-httpd-config\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.586960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-httpd-config\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.587089 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-ovndb-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.587119 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-combined-ca-bundle\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.587143 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-public-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.587212 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-internal-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.587231 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjhbb\" (UniqueName: \"kubernetes.io/projected/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-kube-api-access-fjhbb\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.587247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-config\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.592754 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-public-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.593124 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-combined-ca-bundle\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") 
" pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.595583 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-httpd-config\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.596330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-config\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.599815 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-internal-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.600316 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-ovndb-tls-certs\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.613201 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjhbb\" (UniqueName: \"kubernetes.io/projected/1972fb8a-2570-4dd8-8ae1-b3fccf229e4b-kube-api-access-fjhbb\") pod \"neutron-5fff8f565-9t8rn\" (UID: \"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b\") " pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.712548 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce42d3e2-e953-4283-81f3-855bfb27fd10","Type":"ContainerStarted","Data":"b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef"} Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.713923 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ce88c22-48ff-4c20-a73e-27324f35f70d","Type":"ContainerStarted","Data":"6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d"} Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.716464 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-774c69c88b-662kt" event={"ID":"399bbc40-3013-43ce-9de7-72105e209540","Type":"ContainerStarted","Data":"6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68"} Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.717360 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.722757 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.722734125 podStartE2EDuration="5.722734125s" podCreationTimestamp="2025-11-27 17:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:36.718044418 +0000 UTC m=+1319.060870736" watchObservedRunningTime="2025-11-27 17:31:36.722734125 +0000 
UTC m=+1319.065560443" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.729562 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" event={"ID":"7b749958-4e65-4302-a52d-0a1ec095ba72","Type":"ContainerStarted","Data":"a12eb15d3a62410218e76365d28b9a7c1f379b12621618a975e7ba013b36545a"} Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.729737 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.732776 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3efaa573-2d1c-4668-a6bc-b50aa892a299","Type":"ContainerStarted","Data":"1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf"} Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.757967 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-774c69c88b-662kt" podStartSLOduration=3.757949683 podStartE2EDuration="3.757949683s" podCreationTimestamp="2025-11-27 17:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:36.746415635 +0000 UTC m=+1319.089241953" watchObservedRunningTime="2025-11-27 17:31:36.757949683 +0000 UTC m=+1319.100776001" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.770620 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:36 crc kubenswrapper[4792]: I1127 17:31:36.788337 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" podStartSLOduration=3.78831622 podStartE2EDuration="3.78831622s" podCreationTimestamp="2025-11-27 17:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:36.762923177 +0000 UTC m=+1319.105749495" watchObservedRunningTime="2025-11-27 17:31:36.78831622 +0000 UTC m=+1319.131142538" Nov 27 17:31:37 crc kubenswrapper[4792]: I1127 17:31:37.443113 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fff8f565-9t8rn"] Nov 27 17:31:37 crc kubenswrapper[4792]: W1127 17:31:37.455682 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1972fb8a_2570_4dd8_8ae1_b3fccf229e4b.slice/crio-22693f286f6ca4c9ea935d5b063cd0fb4c1ba2173a93f57c6c7951958ceece07 WatchSource:0}: Error finding container 22693f286f6ca4c9ea935d5b063cd0fb4c1ba2173a93f57c6c7951958ceece07: Status 404 returned error can't find the container with id 22693f286f6ca4c9ea935d5b063cd0fb4c1ba2173a93f57c6c7951958ceece07 Nov 27 17:31:37 crc kubenswrapper[4792]: I1127 17:31:37.747473 4792 generic.go:334] "Generic (PLEG): container finished" podID="821eeff5-b48a-4380-986e-9a9f3bb929eb" containerID="eb1bed6ce26871f6ff33f38050f33b8dc0a72ec45befbead0765b55690bb999e" exitCode=0 Nov 27 17:31:37 crc kubenswrapper[4792]: I1127 17:31:37.747629 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-658m9" event={"ID":"821eeff5-b48a-4380-986e-9a9f3bb929eb","Type":"ContainerDied","Data":"eb1bed6ce26871f6ff33f38050f33b8dc0a72ec45befbead0765b55690bb999e"} Nov 27 17:31:37 crc kubenswrapper[4792]: I1127 17:31:37.755926 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
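The pod_startup_latency_tracker entries in this window fit a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. Pods that pulled nothing carry the zero sentinel 0001-01-01 00:00:00 in both pull fields, so their two durations coincide, as for keystone-bootstrap, glance, neutron, and dnsmasq above. Reproducing the barbican-db-sync-4xhxm figures in Python, with the "+0000 UTC m=+…" suffixes trimmed from the logged timestamps:

```python
from datetime import datetime, timezone

def ns(s):
    """Parse 'YYYY-mm-dd HH:MM:SS[.frac]' (UTC) into integer nanoseconds,
    so the subtractions below are exact."""
    head, _, frac = s.partition(".")
    base = datetime.strptime(head, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return int(base.timestamp()) * 10**9 + int((frac or "0").ljust(9, "0"))

# Figures from the barbican-db-sync-4xhxm entry above.
created    = ns("2025-11-27 17:31:03")                 # podCreationTimestamp
observed   = ns("2025-11-27 17:31:34.558143634")       # watchObservedRunningTime
pull_start = ns("2025-11-27 17:31:05.030788898")       # firstStartedPulling
pull_end   = ns("2025-11-27 17:31:33.205822108")       # lastFinishedPulling

e2e = observed - created               # -> podStartE2EDuration
slo = e2e - (pull_end - pull_start)    # -> podStartSLOduration
print(f"e2e={e2e/1e9:.9f}s slo={slo/1e9:.9f}s")
# e2e=31.558143634s slo=3.383110424s, matching the logged values exactly
```

The same arithmetic yields placement-db-sync-658m9's 3.556109824s from its 31.606244133s end-to-end figure.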
event={"ID":"6ce88c22-48ff-4c20-a73e-27324f35f70d","Type":"ContainerStarted","Data":"746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b"} Nov 27 17:31:37 crc kubenswrapper[4792]: I1127 17:31:37.760468 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fff8f565-9t8rn" event={"ID":"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b","Type":"ContainerStarted","Data":"22693f286f6ca4c9ea935d5b063cd0fb4c1ba2173a93f57c6c7951958ceece07"} Nov 27 17:31:37 crc kubenswrapper[4792]: I1127 17:31:37.794979 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.794938679 podStartE2EDuration="6.794938679s" podCreationTimestamp="2025-11-27 17:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:37.783286569 +0000 UTC m=+1320.126112887" watchObservedRunningTime="2025-11-27 17:31:37.794938679 +0000 UTC m=+1320.137764997" Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.290028 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.290344 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.290394 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.291253 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96c8b617d1cd650967466a2e285f319ed4525e9b0567767b82907caf8e1a4e24"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.291311 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://96c8b617d1cd650967466a2e285f319ed4525e9b0567767b82907caf8e1a4e24" gracePeriod=600 Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.773341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fff8f565-9t8rn" event={"ID":"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b","Type":"ContainerStarted","Data":"73bf45c09487186902a801a1978b1f26d7167cb3b0b73cb11ce4b69c449ff8c8"} Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.773721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fff8f565-9t8rn" event={"ID":"1972fb8a-2570-4dd8-8ae1-b3fccf229e4b","Type":"ContainerStarted","Data":"8d316008162b186732958f676d08c530f9b1784cb5607a2277a2b9b1e93ebc4f"} Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.775130 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.777762 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="96c8b617d1cd650967466a2e285f319ed4525e9b0567767b82907caf8e1a4e24" exitCode=0 Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.777927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"96c8b617d1cd650967466a2e285f319ed4525e9b0567767b82907caf8e1a4e24"} Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.777958 4792 scope.go:117] "RemoveContainer" containerID="d60cedcb892e88638661f9a31eeedcc56ec861fc1db68b55e1cd3c8c8a97edef" Nov 27 17:31:38 crc kubenswrapper[4792]: I1127 17:31:38.845980 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fff8f565-9t8rn" podStartSLOduration=2.8459584749999998 podStartE2EDuration="2.845958475s" podCreationTimestamp="2025-11-27 17:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:38.833276479 +0000 UTC m=+1321.176102797" watchObservedRunningTime="2025-11-27 17:31:38.845958475 +0000 UTC m=+1321.188784793" Nov 27 17:31:39 crc kubenswrapper[4792]: I1127 17:31:39.792874 4792 generic.go:334] "Generic (PLEG): container finished" podID="477a2f0e-b663-45b8-8541-f4d93f420304" containerID="bdb7e5400f0d3628a2149c8d4d991b96cd85b6980fc6794084a1d6301e9320be" exitCode=0 Nov 27 17:31:39 crc kubenswrapper[4792]: I1127 17:31:39.792936 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xbj9c" event={"ID":"477a2f0e-b663-45b8-8541-f4d93f420304","Type":"ContainerDied","Data":"bdb7e5400f0d3628a2149c8d4d991b96cd85b6980fc6794084a1d6301e9320be"} Nov 27 17:31:41 crc kubenswrapper[4792]: I1127 17:31:41.831449 4792 generic.go:334] "Generic (PLEG): container finished" podID="b22e78cf-a700-4a68-8b79-fdb0dc988a04" containerID="ffc4651c0ee50b61bc6b7e81cb27476ff6cbccd8a03264d28e8e4b6f1db7a23a" exitCode=0 Nov 27 17:31:41 crc kubenswrapper[4792]: I1127 17:31:41.831580 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4xhxm" event={"ID":"b22e78cf-a700-4a68-8b79-fdb0dc988a04","Type":"ContainerDied","Data":"ffc4651c0ee50b61bc6b7e81cb27476ff6cbccd8a03264d28e8e4b6f1db7a23a"} Nov 27 17:31:41 crc kubenswrapper[4792]: I1127 17:31:41.860997 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 27 17:31:41 crc kubenswrapper[4792]: I1127 17:31:41.861127 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 27 17:31:41 crc kubenswrapper[4792]: I1127 17:31:41.932289 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 27 17:31:41 crc kubenswrapper[4792]: I1127 17:31:41.933786 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 27 17:31:42 crc kubenswrapper[4792]: I1127 17:31:42.300386 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:42 crc kubenswrapper[4792]: I1127 17:31:42.300494 4792 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:42 crc kubenswrapper[4792]: I1127 17:31:42.360361 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:42 crc kubenswrapper[4792]: I1127 17:31:42.390378 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:42 crc kubenswrapper[4792]: I1127 17:31:42.846046 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:42 crc kubenswrapper[4792]: I1127 17:31:42.846600 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 27 17:31:42 crc kubenswrapper[4792]: I1127 17:31:42.846736 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:42 crc kubenswrapper[4792]: I1127 17:31:42.846769 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.025399 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-658m9" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.149756 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-config-data\") pod \"821eeff5-b48a-4380-986e-9a9f3bb929eb\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.150146 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7lh4\" (UniqueName: \"kubernetes.io/projected/821eeff5-b48a-4380-986e-9a9f3bb929eb-kube-api-access-b7lh4\") pod \"821eeff5-b48a-4380-986e-9a9f3bb929eb\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.150221 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-scripts\") pod \"821eeff5-b48a-4380-986e-9a9f3bb929eb\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.150329 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-combined-ca-bundle\") pod \"821eeff5-b48a-4380-986e-9a9f3bb929eb\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.150354 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821eeff5-b48a-4380-986e-9a9f3bb929eb-logs\") pod \"821eeff5-b48a-4380-986e-9a9f3bb929eb\" (UID: \"821eeff5-b48a-4380-986e-9a9f3bb929eb\") " Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.151524 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/821eeff5-b48a-4380-986e-9a9f3bb929eb-logs" (OuterVolumeSpecName: "logs") pod "821eeff5-b48a-4380-986e-9a9f3bb929eb" (UID: "821eeff5-b48a-4380-986e-9a9f3bb929eb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.156343 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-scripts" (OuterVolumeSpecName: "scripts") pod "821eeff5-b48a-4380-986e-9a9f3bb929eb" (UID: "821eeff5-b48a-4380-986e-9a9f3bb929eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.168902 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821eeff5-b48a-4380-986e-9a9f3bb929eb-kube-api-access-b7lh4" (OuterVolumeSpecName: "kube-api-access-b7lh4") pod "821eeff5-b48a-4380-986e-9a9f3bb929eb" (UID: "821eeff5-b48a-4380-986e-9a9f3bb929eb"). InnerVolumeSpecName "kube-api-access-b7lh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.194207 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-config-data" (OuterVolumeSpecName: "config-data") pod "821eeff5-b48a-4380-986e-9a9f3bb929eb" (UID: "821eeff5-b48a-4380-986e-9a9f3bb929eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.219093 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "821eeff5-b48a-4380-986e-9a9f3bb929eb" (UID: "821eeff5-b48a-4380-986e-9a9f3bb929eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.253556 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.253594 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821eeff5-b48a-4380-986e-9a9f3bb929eb-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.253603 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.253611 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7lh4\" (UniqueName: \"kubernetes.io/projected/821eeff5-b48a-4380-986e-9a9f3bb929eb-kube-api-access-b7lh4\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.253623 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/821eeff5-b48a-4380-986e-9a9f3bb929eb-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.802838 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.871745 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-658m9" 
event={"ID":"821eeff5-b48a-4380-986e-9a9f3bb929eb","Type":"ContainerDied","Data":"67b4c44b6cdca1a868fa36d5c7acf4b2af7e3e55ab33e7ba57f8e5cd834abbc1"} Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.871825 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67b4c44b6cdca1a868fa36d5c7acf4b2af7e3e55ab33e7ba57f8e5cd834abbc1" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.871924 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-658m9" Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.878917 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-r62mm"] Nov 27 17:31:43 crc kubenswrapper[4792]: I1127 17:31:43.879212 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" podUID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerName="dnsmasq-dns" containerID="cri-o://73ce52adb8fdf3cb7d6f0c2b0f45dd76a92dd408329e1af66afa39ea26640272" gracePeriod=10 Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.177175 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" podUID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.182:5353: connect: connection refused" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.178234 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-68b5cd97dd-hfxs2"] Nov 27 17:31:44 crc kubenswrapper[4792]: E1127 17:31:44.178739 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821eeff5-b48a-4380-986e-9a9f3bb929eb" containerName="placement-db-sync" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.178755 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="821eeff5-b48a-4380-986e-9a9f3bb929eb" containerName="placement-db-sync" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.180275 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="821eeff5-b48a-4380-986e-9a9f3bb929eb" containerName="placement-db-sync" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.181373 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.188346 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.188626 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.189077 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.191618 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68b5cd97dd-hfxs2"] Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.194061 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ljsgq" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.195868 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.287410 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-scripts\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.287592 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26b3eb4f-347e-4da5-8da9-56f7620f43a8-logs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.287666 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-config-data\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.287753 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-public-tls-certs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.287979 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmk9k\" (UniqueName: \"kubernetes.io/projected/26b3eb4f-347e-4da5-8da9-56f7620f43a8-kube-api-access-vmk9k\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.288081 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-combined-ca-bundle\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.288172 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-internal-tls-certs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.390050 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-public-tls-certs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.390123 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmk9k\" (UniqueName: \"kubernetes.io/projected/26b3eb4f-347e-4da5-8da9-56f7620f43a8-kube-api-access-vmk9k\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.390148 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-combined-ca-bundle\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.390189 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-internal-tls-certs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.390268 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-scripts\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.390335 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26b3eb4f-347e-4da5-8da9-56f7620f43a8-logs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.390357 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-config-data\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.391124 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26b3eb4f-347e-4da5-8da9-56f7620f43a8-logs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.397020 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-internal-tls-certs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.397020 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-combined-ca-bundle\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.397167 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-public-tls-certs\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.397414 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-scripts\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.398032 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b3eb4f-347e-4da5-8da9-56f7620f43a8-config-data\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.408066 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmk9k\" (UniqueName: \"kubernetes.io/projected/26b3eb4f-347e-4da5-8da9-56f7620f43a8-kube-api-access-vmk9k\") pod \"placement-68b5cd97dd-hfxs2\" (UID: \"26b3eb4f-347e-4da5-8da9-56f7620f43a8\") " pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.497226 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.887260 4792 generic.go:334] "Generic (PLEG): container finished" podID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerID="73ce52adb8fdf3cb7d6f0c2b0f45dd76a92dd408329e1af66afa39ea26640272" exitCode=0 Nov 27 17:31:44 crc kubenswrapper[4792]: I1127 17:31:44.887306 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" event={"ID":"c8048274-9eb3-4d70-aa9d-63bf6f4d210e","Type":"ContainerDied","Data":"73ce52adb8fdf3cb7d6f0c2b0f45dd76a92dd408329e1af66afa39ea26640272"} Nov 27 17:31:45 crc kubenswrapper[4792]: I1127 17:31:45.906859 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:45 crc kubenswrapper[4792]: I1127 17:31:45.926972 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4xhxm" event={"ID":"b22e78cf-a700-4a68-8b79-fdb0dc988a04","Type":"ContainerDied","Data":"2aecc80265509e9d667bfc705612c6a9134c5a7b7eb881d1f28d480ba6661703"} Nov 27 17:31:45 crc kubenswrapper[4792]: I1127 17:31:45.927009 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aecc80265509e9d667bfc705612c6a9134c5a7b7eb881d1f28d480ba6661703" Nov 27 17:31:45 crc kubenswrapper[4792]: I1127 17:31:45.927116 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4xhxm" Nov 27 17:31:45 crc kubenswrapper[4792]: I1127 17:31:45.934018 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:45 crc kubenswrapper[4792]: I1127 17:31:45.934878 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xbj9c" event={"ID":"477a2f0e-b663-45b8-8541-f4d93f420304","Type":"ContainerDied","Data":"037fb098f31f6a09beaa718f109979106939898206e2ce54821f4e321ed1ba01"} Nov 27 17:31:45 crc kubenswrapper[4792]: I1127 17:31:45.934984 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="037fb098f31f6a09beaa718f109979106939898206e2ce54821f4e321ed1ba01" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.031627 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-credential-keys\") pod \"477a2f0e-b663-45b8-8541-f4d93f420304\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.031935 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-combined-ca-bundle\") pod \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.032020 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-db-sync-config-data\") pod \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.032585 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-config-data\") pod \"477a2f0e-b663-45b8-8541-f4d93f420304\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.032621 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-combined-ca-bundle\") pod \"477a2f0e-b663-45b8-8541-f4d93f420304\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.032668 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-fernet-keys\") pod 
\"477a2f0e-b663-45b8-8541-f4d93f420304\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.032709 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-scripts\") pod \"477a2f0e-b663-45b8-8541-f4d93f420304\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.032735 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrk7d\" (UniqueName: \"kubernetes.io/projected/b22e78cf-a700-4a68-8b79-fdb0dc988a04-kube-api-access-xrk7d\") pod \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\" (UID: \"b22e78cf-a700-4a68-8b79-fdb0dc988a04\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.032875 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5zpk\" (UniqueName: \"kubernetes.io/projected/477a2f0e-b663-45b8-8541-f4d93f420304-kube-api-access-l5zpk\") pod \"477a2f0e-b663-45b8-8541-f4d93f420304\" (UID: \"477a2f0e-b663-45b8-8541-f4d93f420304\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.037391 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477a2f0e-b663-45b8-8541-f4d93f420304-kube-api-access-l5zpk" (OuterVolumeSpecName: "kube-api-access-l5zpk") pod "477a2f0e-b663-45b8-8541-f4d93f420304" (UID: "477a2f0e-b663-45b8-8541-f4d93f420304"). InnerVolumeSpecName "kube-api-access-l5zpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.037795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-scripts" (OuterVolumeSpecName: "scripts") pod "477a2f0e-b663-45b8-8541-f4d93f420304" (UID: "477a2f0e-b663-45b8-8541-f4d93f420304"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.042728 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "477a2f0e-b663-45b8-8541-f4d93f420304" (UID: "477a2f0e-b663-45b8-8541-f4d93f420304"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.042834 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22e78cf-a700-4a68-8b79-fdb0dc988a04-kube-api-access-xrk7d" (OuterVolumeSpecName: "kube-api-access-xrk7d") pod "b22e78cf-a700-4a68-8b79-fdb0dc988a04" (UID: "b22e78cf-a700-4a68-8b79-fdb0dc988a04"). InnerVolumeSpecName "kube-api-access-xrk7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.043980 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b22e78cf-a700-4a68-8b79-fdb0dc988a04" (UID: "b22e78cf-a700-4a68-8b79-fdb0dc988a04"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.044105 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "477a2f0e-b663-45b8-8541-f4d93f420304" (UID: "477a2f0e-b663-45b8-8541-f4d93f420304"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.069725 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "477a2f0e-b663-45b8-8541-f4d93f420304" (UID: "477a2f0e-b663-45b8-8541-f4d93f420304"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.094758 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-config-data" (OuterVolumeSpecName: "config-data") pod "477a2f0e-b663-45b8-8541-f4d93f420304" (UID: "477a2f0e-b663-45b8-8541-f4d93f420304"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.101744 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.107817 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b22e78cf-a700-4a68-8b79-fdb0dc988a04" (UID: "b22e78cf-a700-4a68-8b79-fdb0dc988a04"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135297 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5zpk\" (UniqueName: \"kubernetes.io/projected/477a2f0e-b663-45b8-8541-f4d93f420304-kube-api-access-l5zpk\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135330 4792 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135339 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135347 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b22e78cf-a700-4a68-8b79-fdb0dc988a04-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135355 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135365 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135372 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135380 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/477a2f0e-b663-45b8-8541-f4d93f420304-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.135388 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrk7d\" (UniqueName: \"kubernetes.io/projected/b22e78cf-a700-4a68-8b79-fdb0dc988a04-kube-api-access-xrk7d\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.239447 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7p74\" (UniqueName: \"kubernetes.io/projected/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-kube-api-access-x7p74\") pod \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.239918 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-swift-storage-0\") pod \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.240004 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-svc\") pod \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " Nov 27 
17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.240146 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-sb\") pod \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.240182 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-config\") pod \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.240219 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-nb\") pod \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\" (UID: \"c8048274-9eb3-4d70-aa9d-63bf6f4d210e\") " Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.254254 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-kube-api-access-x7p74" (OuterVolumeSpecName: "kube-api-access-x7p74") pod "c8048274-9eb3-4d70-aa9d-63bf6f4d210e" (UID: "c8048274-9eb3-4d70-aa9d-63bf6f4d210e"). InnerVolumeSpecName "kube-api-access-x7p74". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.309374 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c8048274-9eb3-4d70-aa9d-63bf6f4d210e" (UID: "c8048274-9eb3-4d70-aa9d-63bf6f4d210e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.317724 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8048274-9eb3-4d70-aa9d-63bf6f4d210e" (UID: "c8048274-9eb3-4d70-aa9d-63bf6f4d210e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.321554 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8048274-9eb3-4d70-aa9d-63bf6f4d210e" (UID: "c8048274-9eb3-4d70-aa9d-63bf6f4d210e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.338121 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8048274-9eb3-4d70-aa9d-63bf6f4d210e" (UID: "c8048274-9eb3-4d70-aa9d-63bf6f4d210e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.343101 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.343130 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.343149 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.343161 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7p74\" (UniqueName: \"kubernetes.io/projected/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-kube-api-access-x7p74\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.343176 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.343771 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-config" (OuterVolumeSpecName: "config") pod "c8048274-9eb3-4d70-aa9d-63bf6f4d210e" (UID: "c8048274-9eb3-4d70-aa9d-63bf6f4d210e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:46 crc kubenswrapper[4792]: W1127 17:31:46.440790 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26b3eb4f_347e_4da5_8da9_56f7620f43a8.slice/crio-1faaf9e8eb594fc3571b737a8be0169de1bac4e0429ac5fb6d907f77e3ae8b7f WatchSource:0}: Error finding container 1faaf9e8eb594fc3571b737a8be0169de1bac4e0429ac5fb6d907f77e3ae8b7f: Status 404 returned error can't find the container with id 1faaf9e8eb594fc3571b737a8be0169de1bac4e0429ac5fb6d907f77e3ae8b7f Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.445084 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8048274-9eb3-4d70-aa9d-63bf6f4d210e-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.446305 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-68b5cd97dd-hfxs2"] Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.948829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68b5cd97dd-hfxs2" event={"ID":"26b3eb4f-347e-4da5-8da9-56f7620f43a8","Type":"ContainerStarted","Data":"b24b47ffb4a96c3e8f623011e33a798660644bc4ed29aec8f7dc81ad958d2406"} Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.949377 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68b5cd97dd-hfxs2" event={"ID":"26b3eb4f-347e-4da5-8da9-56f7620f43a8","Type":"ContainerStarted","Data":"1faaf9e8eb594fc3571b737a8be0169de1bac4e0429ac5fb6d907f77e3ae8b7f"} Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.957569 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3efaa573-2d1c-4668-a6bc-b50aa892a299","Type":"ContainerStarted","Data":"2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9"} Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.966412 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rr9ql" event={"ID":"1e3f2e74-1077-4a57-9851-1113b4a46729","Type":"ContainerStarted","Data":"bd1041b51280eda217136c8218c55d59a70365228887278f1c67a961cbe62098"} Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.973524 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" event={"ID":"c8048274-9eb3-4d70-aa9d-63bf6f4d210e","Type":"ContainerDied","Data":"eda53e555215000e9cbec577f32382fecac20e27a2e46291f4e99d79e776a3be"} Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.973586 4792 scope.go:117] "RemoveContainer" containerID="73ce52adb8fdf3cb7d6f0c2b0f45dd76a92dd408329e1af66afa39ea26640272" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.973775 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-r62mm" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.991509 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xbj9c" Nov 27 17:31:46 crc kubenswrapper[4792]: I1127 17:31:46.992820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09"} Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.010508 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-rr9ql" podStartSLOduration=2.677715038 podStartE2EDuration="44.010482371s" podCreationTimestamp="2025-11-27 17:31:03 +0000 UTC" firstStartedPulling="2025-11-27 17:31:04.59412577 +0000 UTC m=+1286.936952088" lastFinishedPulling="2025-11-27 17:31:45.926893103 +0000 UTC m=+1328.269719421" observedRunningTime="2025-11-27 17:31:46.990980445 +0000 UTC m=+1329.333806763" watchObservedRunningTime="2025-11-27 17:31:47.010482371 +0000 UTC m=+1329.353308689" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.018967 4792 scope.go:117] "RemoveContainer" containerID="e3f0956425a6e41bab78df2453e071191b2a202e75285fe81dc8c9813d41a2c7" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.207848 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-r62mm"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.249469 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-r62mm"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.296629 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-b9854ff99-9l5qq"] Nov 27 17:31:47 crc kubenswrapper[4792]: E1127 17:31:47.297186 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerName="dnsmasq-dns" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.297204 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerName="dnsmasq-dns" Nov 27 17:31:47 crc kubenswrapper[4792]: E1127 17:31:47.297228 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477a2f0e-b663-45b8-8541-f4d93f420304" containerName="keystone-bootstrap" Nov 27 
17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.297236 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="477a2f0e-b663-45b8-8541-f4d93f420304" containerName="keystone-bootstrap" Nov 27 17:31:47 crc kubenswrapper[4792]: E1127 17:31:47.297256 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerName="init" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.297264 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerName="init" Nov 27 17:31:47 crc kubenswrapper[4792]: E1127 17:31:47.297305 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22e78cf-a700-4a68-8b79-fdb0dc988a04" containerName="barbican-db-sync" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.297313 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22e78cf-a700-4a68-8b79-fdb0dc988a04" containerName="barbican-db-sync" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.297548 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="477a2f0e-b663-45b8-8541-f4d93f420304" containerName="keystone-bootstrap" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.297565 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" containerName="dnsmasq-dns" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.297586 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b22e78cf-a700-4a68-8b79-fdb0dc988a04" containerName="barbican-db-sync" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.314162 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.315427 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b9854ff99-9l5qq"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.318271 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.318454 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vqq54" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.318963 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.337150 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6667648786-v844v"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.338783 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.346966 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.347324 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.347407 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-p6xs8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.347474 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.347547 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.347819 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.371236 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6667648786-v844v"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.384500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-fernet-keys\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.384595 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-public-tls-certs\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.384627 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-config-data-custom\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.384679 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59hjv\" (UniqueName: \"kubernetes.io/projected/64269486-bbcb-49d2-ab84-0591965b9277-kube-api-access-59hjv\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.384696 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrq2s\" (UniqueName: \"kubernetes.io/projected/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-kube-api-access-xrq2s\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.384738 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-scripts\") pod 
\"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.384981 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-config-data\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.385036 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64269486-bbcb-49d2-ab84-0591965b9277-logs\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.385212 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-credential-keys\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.385274 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-config-data\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.385289 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-combined-ca-bundle\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.385342 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-combined-ca-bundle\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.385365 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-internal-tls-certs\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.394024 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6ff478798d-8s6ww"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.395829 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.398580 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.406693 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6ff478798d-8s6ww"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.414383 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gnpw8"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.416353 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.424290 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gnpw8"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.433792 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bf4b697fd-b9sd9"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.435624 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.438770 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.465860 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bf4b697fd-b9sd9"] Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491332 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491384 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m772t\" (UniqueName: \"kubernetes.io/projected/ee822979-d609-4c65-a7e6-290c9da32f04-kube-api-access-m772t\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491424 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrc9q\" (UniqueName: \"kubernetes.io/projected/13220316-7055-49fc-9b5b-747155332282-kube-api-access-mrc9q\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491462 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-config-data\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491490 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-config-data-custom\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491518 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64269486-bbcb-49d2-ab84-0591965b9277-logs\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491633 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491689 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-svc\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491719 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13220316-7055-49fc-9b5b-747155332282-logs\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491744 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-combined-ca-bundle\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491794 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-credential-keys\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491880 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-config-data\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491900 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-combined-ca-bundle\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491941 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-combined-ca-bundle\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491968 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-internal-tls-certs\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.491997 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfpwf\" (UniqueName: \"kubernetes.io/projected/a8029b26-0d9c-428e-af30-62c262f079f4-kube-api-access-wfpwf\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492034 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-fernet-keys\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492053 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492092 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-config-data\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492120 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data-custom\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492140 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8029b26-0d9c-428e-af30-62c262f079f4-logs\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc 
kubenswrapper[4792]: I1127 17:31:47.492173 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-config\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492199 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-public-tls-certs\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492234 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-config-data-custom\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492256 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59hjv\" (UniqueName: \"kubernetes.io/projected/64269486-bbcb-49d2-ab84-0591965b9277-kube-api-access-59hjv\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492277 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrq2s\" (UniqueName: \"kubernetes.io/projected/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-kube-api-access-xrq2s\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492434 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-scripts\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.492499 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-combined-ca-bundle\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.495683 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64269486-bbcb-49d2-ab84-0591965b9277-logs\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.499914 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-fernet-keys\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.500150 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-internal-tls-certs\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.500498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-config-data\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.501274 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-config-data-custom\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.504279 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-public-tls-certs\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.505072 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-config-data\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.509876 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-scripts\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.511396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-credential-keys\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.511486 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64269486-bbcb-49d2-ab84-0591965b9277-combined-ca-bundle\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.512011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-combined-ca-bundle\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.514808 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrq2s\" (UniqueName: 
\"kubernetes.io/projected/f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7-kube-api-access-xrq2s\") pod \"keystone-6667648786-v844v\" (UID: \"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7\") " pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.515181 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59hjv\" (UniqueName: \"kubernetes.io/projected/64269486-bbcb-49d2-ab84-0591965b9277-kube-api-access-59hjv\") pod \"barbican-worker-b9854ff99-9l5qq\" (UID: \"64269486-bbcb-49d2-ab84-0591965b9277\") " pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596419 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfpwf\" (UniqueName: \"kubernetes.io/projected/a8029b26-0d9c-428e-af30-62c262f079f4-kube-api-access-wfpwf\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596472 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596503 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-config-data\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596526 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data-custom\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596542 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8029b26-0d9c-428e-af30-62c262f079f4-logs\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596566 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-config\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-combined-ca-bundle\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596656 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596673 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m772t\" (UniqueName: \"kubernetes.io/projected/ee822979-d609-4c65-a7e6-290c9da32f04-kube-api-access-m772t\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrc9q\" (UniqueName: \"kubernetes.io/projected/13220316-7055-49fc-9b5b-747155332282-kube-api-access-mrc9q\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596720 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-config-data-custom\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596747 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596769 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-svc\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596790 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13220316-7055-49fc-9b5b-747155332282-logs\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-combined-ca-bundle\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.596840 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.597579 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13220316-7055-49fc-9b5b-747155332282-logs\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.597855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.598017 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.598089 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8029b26-0d9c-428e-af30-62c262f079f4-logs\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.598543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-svc\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.598820 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-config\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.599236 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.604160 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-combined-ca-bundle\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.604557 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-combined-ca-bundle\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.604971 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.608071 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-config-data-custom\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.613128 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8029b26-0d9c-428e-af30-62c262f079f4-config-data\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.617116 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfpwf\" (UniqueName: \"kubernetes.io/projected/a8029b26-0d9c-428e-af30-62c262f079f4-kube-api-access-wfpwf\") pod \"barbican-keystone-listener-6ff478798d-8s6ww\" (UID: \"a8029b26-0d9c-428e-af30-62c262f079f4\") " pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.617350 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data-custom\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.620510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrc9q\" (UniqueName: \"kubernetes.io/projected/13220316-7055-49fc-9b5b-747155332282-kube-api-access-mrc9q\") pod \"barbican-api-6bf4b697fd-b9sd9\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.621326 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m772t\" (UniqueName: \"kubernetes.io/projected/ee822979-d609-4c65-a7e6-290c9da32f04-kube-api-access-m772t\") pod \"dnsmasq-dns-688c87cc99-gnpw8\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") " pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.655261 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-b9854ff99-9l5qq" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.676721 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.716364 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.759227 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:47 crc kubenswrapper[4792]: I1127 17:31:47.771189 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.024254 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.024297 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.024788 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.026634 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.029414 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.089223 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-68b5cd97dd-hfxs2" event={"ID":"26b3eb4f-347e-4da5-8da9-56f7620f43a8","Type":"ContainerStarted","Data":"392ffa74373892d78b1f92747f36f3323a179f2fd9b1b6e727c30e3242edc7ef"} Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.089263 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.089275 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-68b5cd97dd-hfxs2" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.201529 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-68b5cd97dd-hfxs2" podStartSLOduration=4.201507697 podStartE2EDuration="4.201507697s" podCreationTimestamp="2025-11-27 17:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:48.176659508 +0000 UTC m=+1330.519485826" watchObservedRunningTime="2025-11-27 17:31:48.201507697 +0000 UTC m=+1330.544334005" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.265535 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.398752 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6667648786-v844v"] Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.651897 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-b9854ff99-9l5qq"] Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.684318 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6ff478798d-8s6ww"] Nov 27 17:31:48 crc kubenswrapper[4792]: I1127 17:31:48.747294 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8048274-9eb3-4d70-aa9d-63bf6f4d210e" path="/var/lib/kubelet/pods/c8048274-9eb3-4d70-aa9d-63bf6f4d210e/volumes" Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.000086 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bf4b697fd-b9sd9"] Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.022319 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gnpw8"] Nov 27 17:31:49 crc kubenswrapper[4792]: W1127 17:31:49.027836 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13220316_7055_49fc_9b5b_747155332282.slice/crio-b6805285adcea5faf045cb00a5c359f2054b03a8bc1e9946e0f9e2c1133b8f43 WatchSource:0}: Error finding container b6805285adcea5faf045cb00a5c359f2054b03a8bc1e9946e0f9e2c1133b8f43: Status 404 returned error can't find the container with id b6805285adcea5faf045cb00a5c359f2054b03a8bc1e9946e0f9e2c1133b8f43 Nov 27 17:31:49 crc kubenswrapper[4792]: W1127 17:31:49.044944 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee822979_d609_4c65_a7e6_290c9da32f04.slice/crio-5617b64d1eb5bdac8588bdd6fa05511c0db866fbcf18eb6a3adea3d786cba896 WatchSource:0}: Error finding container 5617b64d1eb5bdac8588bdd6fa05511c0db866fbcf18eb6a3adea3d786cba896: Status 404 returned error can't find the container with id 5617b64d1eb5bdac8588bdd6fa05511c0db866fbcf18eb6a3adea3d786cba896 Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.137076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" event={"ID":"ee822979-d609-4c65-a7e6-290c9da32f04","Type":"ContainerStarted","Data":"5617b64d1eb5bdac8588bdd6fa05511c0db866fbcf18eb6a3adea3d786cba896"} Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.145708 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b9854ff99-9l5qq" event={"ID":"64269486-bbcb-49d2-ab84-0591965b9277","Type":"ContainerStarted","Data":"9fee4446533167cf1bcdc792f0d1adf06fd847ae0ef61f8077935666a7d24f78"} Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.153305 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf4b697fd-b9sd9" event={"ID":"13220316-7055-49fc-9b5b-747155332282","Type":"ContainerStarted","Data":"b6805285adcea5faf045cb00a5c359f2054b03a8bc1e9946e0f9e2c1133b8f43"} Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.155076 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6667648786-v844v" event={"ID":"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7","Type":"ContainerStarted","Data":"2b80bb2c6f22f67a8cb9d5dae6631427e1d7caf34b27632df9a75af5ed784f61"} Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.155122 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6667648786-v844v" event={"ID":"f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7","Type":"ContainerStarted","Data":"9f69535e75c1fb9d24d3519b999d48b372179d0a9a1fb46e7db319832faf956e"} Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.159366 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6667648786-v844v" Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.177273 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" event={"ID":"a8029b26-0d9c-428e-af30-62c262f079f4","Type":"ContainerStarted","Data":"1b7459ba07d6431634081e4dcc304c7ab142e2fe26a37a8a22396b15474e9981"} Nov 27 17:31:49 crc kubenswrapper[4792]: I1127 17:31:49.191578 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6667648786-v844v" podStartSLOduration=2.191558433 podStartE2EDuration="2.191558433s" podCreationTimestamp="2025-11-27 17:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:49.176319524 +0000 UTC m=+1331.519145842" watchObservedRunningTime="2025-11-27 
17:31:49.191558433 +0000 UTC m=+1331.534384751" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.202965 4792 generic.go:334] "Generic (PLEG): container finished" podID="ee822979-d609-4c65-a7e6-290c9da32f04" containerID="47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1" exitCode=0 Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.203256 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" event={"ID":"ee822979-d609-4c65-a7e6-290c9da32f04","Type":"ContainerDied","Data":"47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1"} Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.209753 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf4b697fd-b9sd9" event={"ID":"13220316-7055-49fc-9b5b-747155332282","Type":"ContainerStarted","Data":"2455a6f689166ee43f49fdb87d34ae25094bf9bd53c1809dd5c56d426bfa4325"} Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.210072 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.210100 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.210112 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf4b697fd-b9sd9" event={"ID":"13220316-7055-49fc-9b5b-747155332282","Type":"ContainerStarted","Data":"504af4e2e96f24dfdc95937394da32df2753de9c7f77c8304925934911c42b15"} Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.255895 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podStartSLOduration=3.255875291 podStartE2EDuration="3.255875291s" podCreationTimestamp="2025-11-27 17:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:50.246915267 +0000 UTC m=+1332.589741605" watchObservedRunningTime="2025-11-27 17:31:50.255875291 +0000 UTC m=+1332.598701609" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.794479 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-65f9f97c5d-544l8"] Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.800488 4792 util.go:30] "No sandbox for pod can be found. 
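
From here on the interesting records come from the Pod Lifecycle Event Generator: each "SyncLoop (PLEG): event for pod" line carries an event={...} payload whose Type (ContainerStarted, ContainerDied) and Data (a container or sandbox ID) happen to be printed as valid JSON. A sketch that rebuilds per-pod timelines from those payloads (same assumed kubelet.log; a payload split by line wrapping is simply skipped by this pattern):

```python
#!/usr/bin/env python3
"""Rebuild per-pod container timelines from kubelet PLEG records.

A minimal sketch: the event={...} payload that kubelet prints is
valid JSON once sliced out of the record, so json.loads suffices.
"""
import json
import re
import sys

PLEG = re.compile(r'pod="([^"]+)" event=(\{[^{}]*\})')

def timeline(path):
    for m in PLEG.finditer(open(path, encoding="utf-8").read()):
        payload = json.loads(m.group(2))
        yield m.group(1), payload["Type"], payload["Data"]

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
    for pod, kind, ref in timeline(path):
        print(f"{pod:55} {kind:17} {ref[:12]}")
```

Comparing the Data values with the crio-... cgroup names in the two cAdvisor watch warnings above shows that a pod's sandbox ID (b6805285... for barbican-api, 5617b64d... for dnsmasq) arrives as an ordinary ContainerStarted event.
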
Need to start a new one" pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.809327 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.809551 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.811953 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65f9f97c5d-544l8"] Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.922224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-combined-ca-bundle\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.922284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kg2f\" (UniqueName: \"kubernetes.io/projected/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-kube-api-access-6kg2f\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.922323 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-internal-tls-certs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.922385 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-public-tls-certs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.922628 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-config-data-custom\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.922666 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-logs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:50 crc kubenswrapper[4792]: I1127 17:31:50.922727 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-config-data\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.024592 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-config-data-custom\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.024664 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-logs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.024712 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-config-data\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.024752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-combined-ca-bundle\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.024778 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kg2f\" (UniqueName: \"kubernetes.io/projected/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-kube-api-access-6kg2f\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.024810 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-internal-tls-certs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.024854 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-public-tls-certs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.027246 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-logs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.030287 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-config-data-custom\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.031088 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-internal-tls-certs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.031139 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-config-data\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.031737 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-combined-ca-bundle\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.040747 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-public-tls-certs\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.045423 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kg2f\" (UniqueName: \"kubernetes.io/projected/e0a4f95d-c1db-43ea-9d79-185c188a4f9b-kube-api-access-6kg2f\") pod \"barbican-api-65f9f97c5d-544l8\" (UID: \"e0a4f95d-c1db-43ea-9d79-185c188a4f9b\") " pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.162541 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.221620 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vcdv9" event={"ID":"2f1c2409-1610-4ede-ab33-880b170c802f","Type":"ContainerStarted","Data":"be55da7b0d2e853f86ec64d21d10ff1467c329f2506bba53fbfc525ec773902f"} Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.225292 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" event={"ID":"ee822979-d609-4c65-a7e6-290c9da32f04","Type":"ContainerStarted","Data":"255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a"} Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.243888 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vcdv9" podStartSLOduration=3.882977493 podStartE2EDuration="48.243870926s" podCreationTimestamp="2025-11-27 17:31:03 +0000 UTC" firstStartedPulling="2025-11-27 17:31:04.978674069 +0000 UTC m=+1287.321500387" lastFinishedPulling="2025-11-27 17:31:49.339567502 +0000 UTC m=+1331.682393820" observedRunningTime="2025-11-27 17:31:51.237793855 +0000 UTC m=+1333.580620173" watchObservedRunningTime="2025-11-27 17:31:51.243870926 +0000 UTC m=+1333.586697264" Nov 27 17:31:51 crc kubenswrapper[4792]: I1127 17:31:51.264290 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" podStartSLOduration=4.264272125 podStartE2EDuration="4.264272125s" podCreationTimestamp="2025-11-27 17:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:51.257601008 +0000 UTC m=+1333.600427336" watchObservedRunningTime="2025-11-27 17:31:51.264272125 +0000 UTC m=+1333.607098443" Nov 27 17:31:52 crc kubenswrapper[4792]: I1127 17:31:52.260036 4792 generic.go:334] "Generic (PLEG): container finished" podID="1e3f2e74-1077-4a57-9851-1113b4a46729" containerID="bd1041b51280eda217136c8218c55d59a70365228887278f1c67a961cbe62098" exitCode=0 Nov 27 17:31:52 crc kubenswrapper[4792]: I1127 17:31:52.260247 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rr9ql" event={"ID":"1e3f2e74-1077-4a57-9851-1113b4a46729","Type":"ContainerDied","Data":"bd1041b51280eda217136c8218c55d59a70365228887278f1c67a961cbe62098"} Nov 27 17:31:52 crc kubenswrapper[4792]: I1127 17:31:52.261693 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:52 crc kubenswrapper[4792]: I1127 17:31:52.777878 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-65f9f97c5d-544l8"] Nov 27 17:31:53 crc kubenswrapper[4792]: I1127 17:31:53.275239 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" event={"ID":"a8029b26-0d9c-428e-af30-62c262f079f4","Type":"ContainerStarted","Data":"e43ba99d6fc8fee8b90df1da406a3e1ae58b5eafa353d6e2d8c38722a2acf27f"} Nov 27 17:31:53 crc kubenswrapper[4792]: I1127 17:31:53.275505 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" event={"ID":"a8029b26-0d9c-428e-af30-62c262f079f4","Type":"ContainerStarted","Data":"4f66d0480db31f9afbff0fee93aa346215f99ddd6031f07caee7b08201eff9ac"} Nov 27 17:31:53 crc kubenswrapper[4792]: I1127 17:31:53.279782 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-worker-b9854ff99-9l5qq" event={"ID":"64269486-bbcb-49d2-ab84-0591965b9277","Type":"ContainerStarted","Data":"8bd6dc87f0590a34859d9f51f1bdb31dae1cb41768c0a081e73b69ccacc7ebda"} Nov 27 17:31:53 crc kubenswrapper[4792]: I1127 17:31:53.279841 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-b9854ff99-9l5qq" event={"ID":"64269486-bbcb-49d2-ab84-0591965b9277","Type":"ContainerStarted","Data":"5d1bb5d7eecf586771ff637f5eca0353d378747c0d878df9350479a6a02d28a7"} Nov 27 17:31:53 crc kubenswrapper[4792]: I1127 17:31:53.313417 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6ff478798d-8s6ww" podStartSLOduration=2.7689594140000002 podStartE2EDuration="6.313395777s" podCreationTimestamp="2025-11-27 17:31:47 +0000 UTC" firstStartedPulling="2025-11-27 17:31:48.725502557 +0000 UTC m=+1331.068328875" lastFinishedPulling="2025-11-27 17:31:52.26993892 +0000 UTC m=+1334.612765238" observedRunningTime="2025-11-27 17:31:53.298637679 +0000 UTC m=+1335.641464017" watchObservedRunningTime="2025-11-27 17:31:53.313395777 +0000 UTC m=+1335.656222115" Nov 27 17:31:53 crc kubenswrapper[4792]: I1127 17:31:53.329439 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-b9854ff99-9l5qq" podStartSLOduration=2.709559124 podStartE2EDuration="6.329419307s" podCreationTimestamp="2025-11-27 17:31:47 +0000 UTC" firstStartedPulling="2025-11-27 17:31:48.650384275 +0000 UTC m=+1330.993210593" lastFinishedPulling="2025-11-27 17:31:52.270244448 +0000 UTC m=+1334.613070776" observedRunningTime="2025-11-27 17:31:53.315378337 +0000 UTC m=+1335.658204655" watchObservedRunningTime="2025-11-27 17:31:53.329419307 +0000 UTC m=+1335.672245625" Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.321066 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rr9ql" event={"ID":"1e3f2e74-1077-4a57-9851-1113b4a46729","Type":"ContainerDied","Data":"dcae7ef47fb8cc118531991899b8c60db8ae7e417ee23a148e0db01885b9e212"} Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.321626 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcae7ef47fb8cc118531991899b8c60db8ae7e417ee23a148e0db01885b9e212" Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.323315 4792 generic.go:334] "Generic (PLEG): container finished" podID="2f1c2409-1610-4ede-ab33-880b170c802f" containerID="be55da7b0d2e853f86ec64d21d10ff1467c329f2506bba53fbfc525ec773902f" exitCode=0 Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.323386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vcdv9" event={"ID":"2f1c2409-1610-4ede-ab33-880b170c802f","Type":"ContainerDied","Data":"be55da7b0d2e853f86ec64d21d10ff1467c329f2506bba53fbfc525ec773902f"} Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.326596 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65f9f97c5d-544l8" event={"ID":"e0a4f95d-c1db-43ea-9d79-185c188a4f9b","Type":"ContainerStarted","Data":"fd71bfed36e6ad885600a58733d848dd457e9c7e5b036fe924f1575e1f17bdd2"} Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.368386 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.539758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-config-data\") pod \"1e3f2e74-1077-4a57-9851-1113b4a46729\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.539855 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-combined-ca-bundle\") pod \"1e3f2e74-1077-4a57-9851-1113b4a46729\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.540134 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnrvw\" (UniqueName: \"kubernetes.io/projected/1e3f2e74-1077-4a57-9851-1113b4a46729-kube-api-access-pnrvw\") pod \"1e3f2e74-1077-4a57-9851-1113b4a46729\" (UID: \"1e3f2e74-1077-4a57-9851-1113b4a46729\") " Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.547461 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3f2e74-1077-4a57-9851-1113b4a46729-kube-api-access-pnrvw" (OuterVolumeSpecName: "kube-api-access-pnrvw") pod "1e3f2e74-1077-4a57-9851-1113b4a46729" (UID: "1e3f2e74-1077-4a57-9851-1113b4a46729"). InnerVolumeSpecName "kube-api-access-pnrvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.580496 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e3f2e74-1077-4a57-9851-1113b4a46729" (UID: "1e3f2e74-1077-4a57-9851-1113b4a46729"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.643782 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnrvw\" (UniqueName: \"kubernetes.io/projected/1e3f2e74-1077-4a57-9851-1113b4a46729-kube-api-access-pnrvw\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.643828 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.643859 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-config-data" (OuterVolumeSpecName: "config-data") pod "1e3f2e74-1077-4a57-9851-1113b4a46729" (UID: "1e3f2e74-1077-4a57-9851-1113b4a46729"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:56 crc kubenswrapper[4792]: I1127 17:31:56.749095 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e3f2e74-1077-4a57-9851-1113b4a46729-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:57 crc kubenswrapper[4792]: E1127 17:31:57.048148 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.338571 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65f9f97c5d-544l8" event={"ID":"e0a4f95d-c1db-43ea-9d79-185c188a4f9b","Type":"ContainerStarted","Data":"0005dd88bd459ec952360835202a540a9896b723208884540f90bb1ddfa5d614"} Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.338619 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-65f9f97c5d-544l8" event={"ID":"e0a4f95d-c1db-43ea-9d79-185c188a4f9b","Type":"ContainerStarted","Data":"be0ae4dc7ce356249301a18ca681da5f1374c0fd931e802cc03bf494b86f8782"} Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.339985 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.340078 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.344727 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="ceilometer-notification-agent" containerID="cri-o://1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf" gracePeriod=30 Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.344981 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3efaa573-2d1c-4668-a6bc-b50aa892a299","Type":"ContainerStarted","Data":"7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7"} Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.345046 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rr9ql" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.345832 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.345886 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="proxy-httpd" containerID="cri-o://7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7" gracePeriod=30 Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.345944 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="sg-core" containerID="cri-o://2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9" gracePeriod=30 Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.368037 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-65f9f97c5d-544l8" podStartSLOduration=7.368012846 podStartE2EDuration="7.368012846s" podCreationTimestamp="2025-11-27 17:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:31:57.365009671 +0000 UTC m=+1339.707836029" watchObservedRunningTime="2025-11-27 17:31:57.368012846 +0000 UTC m=+1339.710839164" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.763910 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.850592 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9rdwk"] Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.850830 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" podUID="7b749958-4e65-4302-a52d-0a1ec095ba72" containerName="dnsmasq-dns" containerID="cri-o://a12eb15d3a62410218e76365d28b9a7c1f379b12621618a975e7ba013b36545a" gracePeriod=10 Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.855213 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.988726 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-db-sync-config-data\") pod \"2f1c2409-1610-4ede-ab33-880b170c802f\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.988768 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg2wg\" (UniqueName: \"kubernetes.io/projected/2f1c2409-1610-4ede-ab33-880b170c802f-kube-api-access-sg2wg\") pod \"2f1c2409-1610-4ede-ab33-880b170c802f\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.988910 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-combined-ca-bundle\") pod \"2f1c2409-1610-4ede-ab33-880b170c802f\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.989059 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f1c2409-1610-4ede-ab33-880b170c802f-etc-machine-id\") pod \"2f1c2409-1610-4ede-ab33-880b170c802f\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.989149 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f1c2409-1610-4ede-ab33-880b170c802f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2f1c2409-1610-4ede-ab33-880b170c802f" (UID: "2f1c2409-1610-4ede-ab33-880b170c802f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.989180 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-config-data\") pod \"2f1c2409-1610-4ede-ab33-880b170c802f\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.989279 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-scripts\") pod \"2f1c2409-1610-4ede-ab33-880b170c802f\" (UID: \"2f1c2409-1610-4ede-ab33-880b170c802f\") " Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.990088 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f1c2409-1610-4ede-ab33-880b170c802f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:57 crc kubenswrapper[4792]: I1127 17:31:57.997890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1c2409-1610-4ede-ab33-880b170c802f-kube-api-access-sg2wg" (OuterVolumeSpecName: "kube-api-access-sg2wg") pod "2f1c2409-1610-4ede-ab33-880b170c802f" (UID: "2f1c2409-1610-4ede-ab33-880b170c802f"). InnerVolumeSpecName "kube-api-access-sg2wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.008538 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2f1c2409-1610-4ede-ab33-880b170c802f" (UID: "2f1c2409-1610-4ede-ab33-880b170c802f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.011958 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-scripts" (OuterVolumeSpecName: "scripts") pod "2f1c2409-1610-4ede-ab33-880b170c802f" (UID: "2f1c2409-1610-4ede-ab33-880b170c802f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.038415 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f1c2409-1610-4ede-ab33-880b170c802f" (UID: "2f1c2409-1610-4ede-ab33-880b170c802f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.089896 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-config-data" (OuterVolumeSpecName: "config-data") pod "2f1c2409-1610-4ede-ab33-880b170c802f" (UID: "2f1c2409-1610-4ede-ab33-880b170c802f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.092518 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.092540 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.092549 4792 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.092561 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg2wg\" (UniqueName: \"kubernetes.io/projected/2f1c2409-1610-4ede-ab33-880b170c802f-kube-api-access-sg2wg\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.092570 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1c2409-1610-4ede-ab33-880b170c802f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.364158 4792 generic.go:334] "Generic (PLEG): container finished" podID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerID="7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7" exitCode=0 Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.364436 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerID="2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9" exitCode=2 Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.364480 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3efaa573-2d1c-4668-a6bc-b50aa892a299","Type":"ContainerDied","Data":"7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7"} Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.364506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3efaa573-2d1c-4668-a6bc-b50aa892a299","Type":"ContainerDied","Data":"2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9"} Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.369986 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vcdv9" event={"ID":"2f1c2409-1610-4ede-ab33-880b170c802f","Type":"ContainerDied","Data":"8f69fbb18753af4ade9673e29af17319d8e17d1c7de2bf7429b5b6599ed68001"} Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.370028 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f69fbb18753af4ade9673e29af17319d8e17d1c7de2bf7429b5b6599ed68001" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.370115 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vcdv9" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.394219 4792 generic.go:334] "Generic (PLEG): container finished" podID="7b749958-4e65-4302-a52d-0a1ec095ba72" containerID="a12eb15d3a62410218e76365d28b9a7c1f379b12621618a975e7ba013b36545a" exitCode=0 Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.395212 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" event={"ID":"7b749958-4e65-4302-a52d-0a1ec095ba72","Type":"ContainerDied","Data":"a12eb15d3a62410218e76365d28b9a7c1f379b12621618a975e7ba013b36545a"} Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.512468 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.590480 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 27 17:31:58 crc kubenswrapper[4792]: E1127 17:31:58.591035 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1c2409-1610-4ede-ab33-880b170c802f" containerName="cinder-db-sync" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.591056 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1c2409-1610-4ede-ab33-880b170c802f" containerName="cinder-db-sync" Nov 27 17:31:58 crc kubenswrapper[4792]: E1127 17:31:58.591080 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b749958-4e65-4302-a52d-0a1ec095ba72" containerName="dnsmasq-dns" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.591090 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b749958-4e65-4302-a52d-0a1ec095ba72" containerName="dnsmasq-dns" Nov 27 17:31:58 crc kubenswrapper[4792]: E1127 17:31:58.591143 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b749958-4e65-4302-a52d-0a1ec095ba72" containerName="init" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.591151 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b749958-4e65-4302-a52d-0a1ec095ba72" containerName="init" Nov 27 17:31:58 crc kubenswrapper[4792]: E1127 17:31:58.591162 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e3f2e74-1077-4a57-9851-1113b4a46729" containerName="heat-db-sync" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.591169 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3f2e74-1077-4a57-9851-1113b4a46729" containerName="heat-db-sync" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.591416 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b749958-4e65-4302-a52d-0a1ec095ba72" containerName="dnsmasq-dns" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.591443 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f1c2409-1610-4ede-ab33-880b170c802f" containerName="cinder-db-sync" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.591457 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e3f2e74-1077-4a57-9851-1113b4a46729" containerName="heat-db-sync" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.592797 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.596119 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.596572 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.596761 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-5lcgs" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.596772 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.607584 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-sb\") pod \"7b749958-4e65-4302-a52d-0a1ec095ba72\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.607627 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-config\") pod \"7b749958-4e65-4302-a52d-0a1ec095ba72\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.607692 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfjfd\" (UniqueName: \"kubernetes.io/projected/7b749958-4e65-4302-a52d-0a1ec095ba72-kube-api-access-dfjfd\") pod \"7b749958-4e65-4302-a52d-0a1ec095ba72\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.607728 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-nb\") pod \"7b749958-4e65-4302-a52d-0a1ec095ba72\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.607785 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-swift-storage-0\") pod \"7b749958-4e65-4302-a52d-0a1ec095ba72\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.607883 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-svc\") pod \"7b749958-4e65-4302-a52d-0a1ec095ba72\" (UID: \"7b749958-4e65-4302-a52d-0a1ec095ba72\") " Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.614872 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.640269 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b749958-4e65-4302-a52d-0a1ec095ba72-kube-api-access-dfjfd" (OuterVolumeSpecName: "kube-api-access-dfjfd") pod "7b749958-4e65-4302-a52d-0a1ec095ba72" (UID: "7b749958-4e65-4302-a52d-0a1ec095ba72"). InnerVolumeSpecName "kube-api-access-dfjfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.717047 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5c6d\" (UniqueName: \"kubernetes.io/projected/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-kube-api-access-x5c6d\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.717677 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.717873 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.718114 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.718290 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.718513 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.718712 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfjfd\" (UniqueName: \"kubernetes.io/projected/7b749958-4e65-4302-a52d-0a1ec095ba72-kube-api-access-dfjfd\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.764920 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b749958-4e65-4302-a52d-0a1ec095ba72" (UID: "7b749958-4e65-4302-a52d-0a1ec095ba72"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.782042 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-62fkd"] Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.784512 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.793184 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b749958-4e65-4302-a52d-0a1ec095ba72" (UID: "7b749958-4e65-4302-a52d-0a1ec095ba72"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.817768 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-62fkd"] Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.820170 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5c6d\" (UniqueName: \"kubernetes.io/projected/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-kube-api-access-x5c6d\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.820211 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.820290 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.820373 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.820412 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.820516 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.820600 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.820617 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.821981 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.824935 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b749958-4e65-4302-a52d-0a1ec095ba72" (UID: "7b749958-4e65-4302-a52d-0a1ec095ba72"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.832484 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.842125 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.845453 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.849836 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.859571 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5c6d\" (UniqueName: \"kubernetes.io/projected/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-kube-api-access-x5c6d\") pod \"cinder-scheduler-0\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") " pod="openstack/cinder-scheduler-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.876268 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7b749958-4e65-4302-a52d-0a1ec095ba72" (UID: "7b749958-4e65-4302-a52d-0a1ec095ba72"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.913366 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.915421 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.919137 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.926687 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.926757 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.926800 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.926829 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-config\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.926874 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.926890 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t56tw\" (UniqueName: \"kubernetes.io/projected/9f38ffc0-9dc6-485a-835f-5d038444fa07-kube-api-access-t56tw\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.927017 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.927028 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.952284 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:31:58 crc kubenswrapper[4792]: I1127 17:31:58.964362 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-config" (OuterVolumeSpecName: "config") pod "7b749958-4e65-4302-a52d-0a1ec095ba72" (UID: "7b749958-4e65-4302-a52d-0a1ec095ba72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029200 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt7jf\" (UniqueName: \"kubernetes.io/projected/a9bc826f-d192-4613-a37b-952cadcefbb7-kube-api-access-mt7jf\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029279 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029315 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-config\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029349 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029369 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t56tw\" (UniqueName: \"kubernetes.io/projected/9f38ffc0-9dc6-485a-835f-5d038444fa07-kube-api-access-t56tw\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029396 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029425 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9bc826f-d192-4613-a37b-952cadcefbb7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bc826f-d192-4613-a37b-952cadcefbb7-logs\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029505 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-scripts\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029551 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029571 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data-custom\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029603 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029621 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.029701 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b749958-4e65-4302-a52d-0a1ec095ba72-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.030172 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-config\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.030317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.030363 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.030806 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.030997 4792 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.031012 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.051426 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t56tw\" (UniqueName: \"kubernetes.io/projected/9f38ffc0-9dc6-485a-835f-5d038444fa07-kube-api-access-t56tw\") pod \"dnsmasq-dns-6bb4fc677f-62fkd\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.138815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-scripts\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.139018 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data-custom\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.139103 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.139163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt7jf\" (UniqueName: \"kubernetes.io/projected/a9bc826f-d192-4613-a37b-952cadcefbb7-kube-api-access-mt7jf\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.140151 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.140232 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9bc826f-d192-4613-a37b-952cadcefbb7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.140263 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bc826f-d192-4613-a37b-952cadcefbb7-logs\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.141183 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bc826f-d192-4613-a37b-952cadcefbb7-logs\") pod 
\"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.146268 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.146396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9bc826f-d192-4613-a37b-952cadcefbb7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.153685 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-scripts\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.160883 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.161003 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data-custom\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.163348 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.168833 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt7jf\" (UniqueName: \"kubernetes.io/projected/a9bc826f-d192-4613-a37b-952cadcefbb7-kube-api-access-mt7jf\") pod \"cinder-api-0\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.246558 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.410794 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" event={"ID":"7b749958-4e65-4302-a52d-0a1ec095ba72","Type":"ContainerDied","Data":"01a6c3d662a43ba8d3c8b42abcb3e81f9e1d199c6412a15f3cebdab97b055df3"} Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.410959 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-9rdwk" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.411073 4792 scope.go:117] "RemoveContainer" containerID="a12eb15d3a62410218e76365d28b9a7c1f379b12621618a975e7ba013b36545a" Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.452353 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9rdwk"] Nov 27 17:31:59 crc kubenswrapper[4792]: I1127 17:31:59.460361 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9rdwk"] Nov 27 17:32:00 crc kubenswrapper[4792]: I1127 17:32:00.311025 4792 scope.go:117] "RemoveContainer" containerID="0255dea69c99c7bb02be61da15ea9700c47a9a169c45ef1d66f0bae5e682176c" Nov 27 17:32:00 crc kubenswrapper[4792]: I1127 17:32:00.499885 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:32:00 crc kubenswrapper[4792]: I1127 17:32:00.507105 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:32:00 crc kubenswrapper[4792]: I1127 17:32:00.524617 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:32:00 crc kubenswrapper[4792]: I1127 17:32:00.531684 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 27 17:32:00 crc kubenswrapper[4792]: I1127 17:32:00.709886 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b749958-4e65-4302-a52d-0a1ec095ba72" path="/var/lib/kubelet/pods/7b749958-4e65-4302-a52d-0a1ec095ba72/volumes" Nov 27 17:32:00 crc kubenswrapper[4792]: I1127 17:32:00.956538 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.147010 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.240005 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.370228 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-62fkd"] Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.403972 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-log-httpd\") pod \"3efaa573-2d1c-4668-a6bc-b50aa892a299\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.404285 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-config-data\") pod \"3efaa573-2d1c-4668-a6bc-b50aa892a299\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.404305 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thcrd\" (UniqueName: \"kubernetes.io/projected/3efaa573-2d1c-4668-a6bc-b50aa892a299-kube-api-access-thcrd\") pod \"3efaa573-2d1c-4668-a6bc-b50aa892a299\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.404326 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-sg-core-conf-yaml\") pod \"3efaa573-2d1c-4668-a6bc-b50aa892a299\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.404360 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-scripts\") pod \"3efaa573-2d1c-4668-a6bc-b50aa892a299\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.404395 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-run-httpd\") pod \"3efaa573-2d1c-4668-a6bc-b50aa892a299\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.404527 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-combined-ca-bundle\") pod \"3efaa573-2d1c-4668-a6bc-b50aa892a299\" (UID: \"3efaa573-2d1c-4668-a6bc-b50aa892a299\") " Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.404597 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3efaa573-2d1c-4668-a6bc-b50aa892a299" (UID: "3efaa573-2d1c-4668-a6bc-b50aa892a299"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.404950 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.405776 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3efaa573-2d1c-4668-a6bc-b50aa892a299" (UID: "3efaa573-2d1c-4668-a6bc-b50aa892a299"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.409626 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efaa573-2d1c-4668-a6bc-b50aa892a299-kube-api-access-thcrd" (OuterVolumeSpecName: "kube-api-access-thcrd") pod "3efaa573-2d1c-4668-a6bc-b50aa892a299" (UID: "3efaa573-2d1c-4668-a6bc-b50aa892a299"). InnerVolumeSpecName "kube-api-access-thcrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.433816 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-scripts" (OuterVolumeSpecName: "scripts") pod "3efaa573-2d1c-4668-a6bc-b50aa892a299" (UID: "3efaa573-2d1c-4668-a6bc-b50aa892a299"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.459184 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3efaa573-2d1c-4668-a6bc-b50aa892a299" (UID: "3efaa573-2d1c-4668-a6bc-b50aa892a299"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.505327 4792 generic.go:334] "Generic (PLEG): container finished" podID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerID="1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf" exitCode=0 Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.505415 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3efaa573-2d1c-4668-a6bc-b50aa892a299","Type":"ContainerDied","Data":"1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf"} Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.505450 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3efaa573-2d1c-4668-a6bc-b50aa892a299","Type":"ContainerDied","Data":"255890a81f6063f9140a329353dcc03e68889524b7d104e7c2f4c5862c987da9"} Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.505471 4792 scope.go:117] "RemoveContainer" containerID="7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.505406 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.509924 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" event={"ID":"9f38ffc0-9dc6-485a-835f-5d038444fa07","Type":"ContainerStarted","Data":"91e6f99403948ec8f258104ce1492e67b30a7a14e4dbd220e59eed7c9cfcf21b"} Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.512308 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a9bc826f-d192-4613-a37b-952cadcefbb7","Type":"ContainerStarted","Data":"2f65b4863f918c4ad7cbedbce401845830706a2c09d178e6d5f407c37044ae40"} Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.515099 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thcrd\" (UniqueName: \"kubernetes.io/projected/3efaa573-2d1c-4668-a6bc-b50aa892a299-kube-api-access-thcrd\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.515113 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.515121 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.515130 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3efaa573-2d1c-4668-a6bc-b50aa892a299-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.516893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dff0fe27-95c6-4685-8ac5-dd07fe300f3a","Type":"ContainerStarted","Data":"3602f142c2dab023f5038035c264848459e7685836ee19bd507887165fcdd1ab"} Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.522211 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3efaa573-2d1c-4668-a6bc-b50aa892a299" (UID: "3efaa573-2d1c-4668-a6bc-b50aa892a299"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.553795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-config-data" (OuterVolumeSpecName: "config-data") pod "3efaa573-2d1c-4668-a6bc-b50aa892a299" (UID: "3efaa573-2d1c-4668-a6bc-b50aa892a299"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.614744 4792 scope.go:117] "RemoveContainer" containerID="2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.617831 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.617952 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3efaa573-2d1c-4668-a6bc-b50aa892a299-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.652004 4792 scope.go:117] "RemoveContainer" containerID="1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.793561 4792 scope.go:117] "RemoveContainer" containerID="7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7" Nov 27 17:32:01 crc kubenswrapper[4792]: E1127 17:32:01.794993 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7\": container with ID starting with 7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7 not found: ID does not exist" containerID="7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.795122 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7"} err="failed to get container status \"7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7\": rpc error: code = NotFound desc = could not find container \"7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7\": container with ID starting with 7537015d9261e57114be1f9b956a4673786562a4a646285747d9069c782d5de7 not found: ID does not exist" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.795228 4792 scope.go:117] "RemoveContainer" containerID="2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9" Nov 27 17:32:01 crc kubenswrapper[4792]: E1127 17:32:01.795875 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9\": container with ID starting with 2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9 not found: ID does not exist" containerID="2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.795934 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9"} err="failed to get container status \"2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9\": rpc error: code = NotFound desc = could not find container \"2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9\": container with ID starting with 2a8e7214f731a7ff19ca3cf44a30b1a69708bebf7b52fc93e4ad6171a24b1fd9 not found: ID does not exist" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.795961 4792 scope.go:117] "RemoveContainer" 
containerID="1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf" Nov 27 17:32:01 crc kubenswrapper[4792]: E1127 17:32:01.796448 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf\": container with ID starting with 1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf not found: ID does not exist" containerID="1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.796479 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf"} err="failed to get container status \"1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf\": rpc error: code = NotFound desc = could not find container \"1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf\": container with ID starting with 1fa04bca7b7b67f52cacec4043f6fe71696ef5ffde7c1d1171253a1af71444bf not found: ID does not exist" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.929854 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.973201 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.985548 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:01 crc kubenswrapper[4792]: E1127 17:32:01.986045 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="proxy-httpd" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.986063 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="proxy-httpd" Nov 27 17:32:01 crc kubenswrapper[4792]: E1127 17:32:01.986092 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="sg-core" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.986099 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="sg-core" Nov 27 17:32:01 crc kubenswrapper[4792]: E1127 17:32:01.986111 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="ceilometer-notification-agent" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.986117 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="ceilometer-notification-agent" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.986314 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="ceilometer-notification-agent" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.986337 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="proxy-httpd" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.986363 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" containerName="sg-core" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.988325 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.990331 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.990522 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:32:01 crc kubenswrapper[4792]: I1127 17:32:01.995848 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.046936 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.047007 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.047038 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-scripts\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.047071 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4p8\" (UniqueName: \"kubernetes.io/projected/385b9d5c-07c2-40b5-99c2-6176a9611572-kube-api-access-gr4p8\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.047192 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-config-data\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.047232 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-log-httpd\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.047266 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-run-httpd\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.148585 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 
17:32:02.148670 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.148696 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-scripts\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.148731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr4p8\" (UniqueName: \"kubernetes.io/projected/385b9d5c-07c2-40b5-99c2-6176a9611572-kube-api-access-gr4p8\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.148846 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-config-data\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.148884 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-log-httpd\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.148913 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-run-httpd\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.149473 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-run-httpd\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.151602 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-log-httpd\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.154170 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.154872 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.155539 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-scripts\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.155716 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-config-data\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.166615 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr4p8\" (UniqueName: \"kubernetes.io/projected/385b9d5c-07c2-40b5-99c2-6176a9611572-kube-api-access-gr4p8\") pod \"ceilometer-0\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.308591 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.533019 4792 generic.go:334] "Generic (PLEG): container finished" podID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerID="a20bce2a4495f2a0e91585b78eabb50e6832af3b03208ac1472e5f3b4b04cea7" exitCode=0 Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.533584 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" event={"ID":"9f38ffc0-9dc6-485a-835f-5d038444fa07","Type":"ContainerDied","Data":"a20bce2a4495f2a0e91585b78eabb50e6832af3b03208ac1472e5f3b4b04cea7"} Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.558004 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a9bc826f-d192-4613-a37b-952cadcefbb7","Type":"ContainerStarted","Data":"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483"} Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.700993 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3efaa573-2d1c-4668-a6bc-b50aa892a299" path="/var/lib/kubelet/pods/3efaa573-2d1c-4668-a6bc-b50aa892a299/volumes" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.818815 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.823620 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:32:02 crc kubenswrapper[4792]: I1127 17:32:02.841413 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.591999 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerStarted","Data":"a20121fbfcfa5a08deefa59baca008ad54bcc8fea4f5936f36adb86333235a45"} Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.598947 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a9bc826f-d192-4613-a37b-952cadcefbb7","Type":"ContainerStarted","Data":"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d"} Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.603841 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" 
containerName="cinder-api-log" containerID="cri-o://a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483" gracePeriod=30 Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.604419 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.604722 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerName="cinder-api" containerID="cri-o://38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d" gracePeriod=30 Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.614211 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dff0fe27-95c6-4685-8ac5-dd07fe300f3a","Type":"ContainerStarted","Data":"c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92"} Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.635337 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" event={"ID":"9f38ffc0-9dc6-485a-835f-5d038444fa07","Type":"ContainerStarted","Data":"2a4e10324be629cca9109f608bad0d982ac4c2ce4b73c0a6ca33fa53a6b66de3"} Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.638373 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.667746 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.667718622 podStartE2EDuration="5.667718622s" podCreationTimestamp="2025-11-27 17:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:03.624606868 +0000 UTC m=+1345.967433186" watchObservedRunningTime="2025-11-27 17:32:03.667718622 +0000 UTC m=+1346.010544940" Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.716885 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.87323795 podStartE2EDuration="5.716865237s" podCreationTimestamp="2025-11-27 17:31:58 +0000 UTC" firstStartedPulling="2025-11-27 17:32:00.993735035 +0000 UTC m=+1343.336561353" lastFinishedPulling="2025-11-27 17:32:01.837362322 +0000 UTC m=+1344.180188640" observedRunningTime="2025-11-27 17:32:03.652028961 +0000 UTC m=+1345.994855299" watchObservedRunningTime="2025-11-27 17:32:03.716865237 +0000 UTC m=+1346.059691555" Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.738114 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" podStartSLOduration=5.738088906 podStartE2EDuration="5.738088906s" podCreationTimestamp="2025-11-27 17:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:03.673958138 +0000 UTC m=+1346.016784456" watchObservedRunningTime="2025-11-27 17:32:03.738088906 +0000 UTC m=+1346.080915214" Nov 27 17:32:03 crc kubenswrapper[4792]: I1127 17:32:03.835592 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.036298 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 27 17:32:04 crc 
kubenswrapper[4792]: I1127 17:32:04.515009 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.652732 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-combined-ca-bundle\") pod \"a9bc826f-d192-4613-a37b-952cadcefbb7\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.653185 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data\") pod \"a9bc826f-d192-4613-a37b-952cadcefbb7\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.653220 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-scripts\") pod \"a9bc826f-d192-4613-a37b-952cadcefbb7\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.653251 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9bc826f-d192-4613-a37b-952cadcefbb7-etc-machine-id\") pod \"a9bc826f-d192-4613-a37b-952cadcefbb7\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.653276 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt7jf\" (UniqueName: \"kubernetes.io/projected/a9bc826f-d192-4613-a37b-952cadcefbb7-kube-api-access-mt7jf\") pod \"a9bc826f-d192-4613-a37b-952cadcefbb7\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.653351 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bc826f-d192-4613-a37b-952cadcefbb7-logs\") pod \"a9bc826f-d192-4613-a37b-952cadcefbb7\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.653376 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data-custom\") pod \"a9bc826f-d192-4613-a37b-952cadcefbb7\" (UID: \"a9bc826f-d192-4613-a37b-952cadcefbb7\") " Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.664638 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9bc826f-d192-4613-a37b-952cadcefbb7-logs" (OuterVolumeSpecName: "logs") pod "a9bc826f-d192-4613-a37b-952cadcefbb7" (UID: "a9bc826f-d192-4613-a37b-952cadcefbb7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.664749 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9bc826f-d192-4613-a37b-952cadcefbb7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a9bc826f-d192-4613-a37b-952cadcefbb7" (UID: "a9bc826f-d192-4613-a37b-952cadcefbb7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.675584 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerStarted","Data":"ad401a8d08e41e8e46cd359acbfea5359b344bb5da8af447810b4a705854c12c"} Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.682712 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-scripts" (OuterVolumeSpecName: "scripts") pod "a9bc826f-d192-4613-a37b-952cadcefbb7" (UID: "a9bc826f-d192-4613-a37b-952cadcefbb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.682754 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a9bc826f-d192-4613-a37b-952cadcefbb7" (UID: "a9bc826f-d192-4613-a37b-952cadcefbb7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.684132 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9bc826f-d192-4613-a37b-952cadcefbb7-kube-api-access-mt7jf" (OuterVolumeSpecName: "kube-api-access-mt7jf") pod "a9bc826f-d192-4613-a37b-952cadcefbb7" (UID: "a9bc826f-d192-4613-a37b-952cadcefbb7"). InnerVolumeSpecName "kube-api-access-mt7jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.685109 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerID="38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d" exitCode=0 Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.685244 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerID="a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483" exitCode=143 Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.685394 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a9bc826f-d192-4613-a37b-952cadcefbb7","Type":"ContainerDied","Data":"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d"} Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.685527 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a9bc826f-d192-4613-a37b-952cadcefbb7","Type":"ContainerDied","Data":"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483"} Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.685622 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a9bc826f-d192-4613-a37b-952cadcefbb7","Type":"ContainerDied","Data":"2f65b4863f918c4ad7cbedbce401845830706a2c09d178e6d5f407c37044ae40"} Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.685795 4792 scope.go:117] "RemoveContainer" containerID="38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.686080 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.733881 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9bc826f-d192-4613-a37b-952cadcefbb7" (UID: "a9bc826f-d192-4613-a37b-952cadcefbb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.742201 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dff0fe27-95c6-4685-8ac5-dd07fe300f3a","Type":"ContainerStarted","Data":"322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6"} Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.760282 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt7jf\" (UniqueName: \"kubernetes.io/projected/a9bc826f-d192-4613-a37b-952cadcefbb7-kube-api-access-mt7jf\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.760317 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bc826f-d192-4613-a37b-952cadcefbb7-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.760327 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.760335 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.760343 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.760352 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9bc826f-d192-4613-a37b-952cadcefbb7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.761358 4792 scope.go:117] "RemoveContainer" containerID="a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.772722 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data" (OuterVolumeSpecName: "config-data") pod "a9bc826f-d192-4613-a37b-952cadcefbb7" (UID: "a9bc826f-d192-4613-a37b-952cadcefbb7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.789456 4792 scope.go:117] "RemoveContainer" containerID="38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d" Nov 27 17:32:04 crc kubenswrapper[4792]: E1127 17:32:04.792120 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d\": container with ID starting with 38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d not found: ID does not exist" containerID="38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.792178 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d"} err="failed to get container status \"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d\": rpc error: code = NotFound desc = could not find container \"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d\": container with ID starting with 38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d not found: ID does not exist" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.792208 4792 scope.go:117] "RemoveContainer" containerID="a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483" Nov 27 17:32:04 crc kubenswrapper[4792]: E1127 17:32:04.793064 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483\": container with ID starting with a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483 not found: ID does not exist" containerID="a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.793181 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483"} err="failed to get container status \"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483\": rpc error: code = NotFound desc = could not find container \"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483\": container with ID starting with a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483 not found: ID does not exist" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.793282 4792 scope.go:117] "RemoveContainer" containerID="38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.794155 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d"} err="failed to get container status \"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d\": rpc error: code = NotFound desc = could not find container \"38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d\": container with ID starting with 38f63bd747f9b57d2d749b6cc1fe67f9ec9559de373e47f5dbbaf1704c13d89d not found: ID does not exist" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.794185 4792 scope.go:117] "RemoveContainer" containerID="a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.795099 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483"} err="failed to get container status \"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483\": rpc error: code = NotFound desc = could not find container \"a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483\": container with ID starting with a1121499d02fa277079b9c5a650ee9bd8072a54d57c72f2cfba0b975aba11483 not found: ID does not exist" Nov 27 17:32:04 crc kubenswrapper[4792]: I1127 17:32:04.864583 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bc826f-d192-4613-a37b-952cadcefbb7-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.018698 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.037224 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.050043 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:32:05 crc kubenswrapper[4792]: E1127 17:32:05.050477 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerName="cinder-api" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.050495 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerName="cinder-api" Nov 27 17:32:05 crc kubenswrapper[4792]: E1127 17:32:05.050541 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerName="cinder-api-log" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.050547 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerName="cinder-api-log" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.050752 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerName="cinder-api" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.050775 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" containerName="cinder-api-log" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.052041 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.055833 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.060163 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.062325 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.062699 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.070278 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-65f9f97c5d-544l8" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.177783 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bf4b697fd-b9sd9"] Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178182 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178255 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbc2fff7-567e-4a6d-918a-7f6f430486c1-logs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178296 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api-log" containerID="cri-o://504af4e2e96f24dfdc95937394da32df2753de9c7f77c8304925934911c42b15" gracePeriod=30 Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-config-data\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178372 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bbc2fff7-567e-4a6d-918a-7f6f430486c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178390 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178498 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178517 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-scripts\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178577 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178695 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178719 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcgml\" (UniqueName: \"kubernetes.io/projected/bbc2fff7-567e-4a6d-918a-7f6f430486c1-kube-api-access-mcgml\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.178784 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api" containerID="cri-o://2455a6f689166ee43f49fdb87d34ae25094bf9bd53c1809dd5c56d426bfa4325" gracePeriod=30 Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.188191 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.295915 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.295973 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcgml\" (UniqueName: \"kubernetes.io/projected/bbc2fff7-567e-4a6d-918a-7f6f430486c1-kube-api-access-mcgml\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.296016 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbc2fff7-567e-4a6d-918a-7f6f430486c1-logs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.296062 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-config-data\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.296155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/bbc2fff7-567e-4a6d-918a-7f6f430486c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.296179 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.296277 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.296326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-scripts\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.296701 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.296823 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bbc2fff7-567e-4a6d-918a-7f6f430486c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.299288 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbc2fff7-567e-4a6d-918a-7f6f430486c1-logs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.301808 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-scripts\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.303161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-config-data\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.303360 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.306792 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.306853 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.330458 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc2fff7-567e-4a6d-918a-7f6f430486c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.334446 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcgml\" (UniqueName: \"kubernetes.io/projected/bbc2fff7-567e-4a6d-918a-7f6f430486c1-kube-api-access-mcgml\") pod \"cinder-api-0\" (UID: \"bbc2fff7-567e-4a6d-918a-7f6f430486c1\") " pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.374349 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.713483 4792 generic.go:334] "Generic (PLEG): container finished" podID="13220316-7055-49fc-9b5b-747155332282" containerID="504af4e2e96f24dfdc95937394da32df2753de9c7f77c8304925934911c42b15" exitCode=143 Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.713565 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf4b697fd-b9sd9" event={"ID":"13220316-7055-49fc-9b5b-747155332282","Type":"ContainerDied","Data":"504af4e2e96f24dfdc95937394da32df2753de9c7f77c8304925934911c42b15"} Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.723168 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerStarted","Data":"42c704040b2e7114532a3840fab2c929298f43ca59facf7569112198b3ba3355"} Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.723205 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerStarted","Data":"9b747fe5686d277f3530c0f5bc248d5e907378aec673bff5cb5400dfface8180"} Nov 27 17:32:05 crc kubenswrapper[4792]: W1127 17:32:05.849613 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc2fff7_567e_4a6d_918a_7f6f430486c1.slice/crio-35eac01c93d95f808fa42c891bd1fa45d77191814b5ab21a5986026385f6fad9 WatchSource:0}: Error finding container 35eac01c93d95f808fa42c891bd1fa45d77191814b5ab21a5986026385f6fad9: Status 404 returned error can't find the container with id 35eac01c93d95f808fa42c891bd1fa45d77191814b5ab21a5986026385f6fad9 Nov 27 17:32:05 crc kubenswrapper[4792]: I1127 17:32:05.850806 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 27 17:32:06 crc kubenswrapper[4792]: I1127 17:32:06.701562 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9bc826f-d192-4613-a37b-952cadcefbb7" path="/var/lib/kubelet/pods/a9bc826f-d192-4613-a37b-952cadcefbb7/volumes" Nov 27 17:32:06 crc 
kubenswrapper[4792]: I1127 17:32:06.742480 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bbc2fff7-567e-4a6d-918a-7f6f430486c1","Type":"ContainerStarted","Data":"f6fa445177b792b811b236d1201cc3a48f186247eb3bd1fe261f9fcceecf5300"} Nov 27 17:32:06 crc kubenswrapper[4792]: I1127 17:32:06.742541 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bbc2fff7-567e-4a6d-918a-7f6f430486c1","Type":"ContainerStarted","Data":"35eac01c93d95f808fa42c891bd1fa45d77191814b5ab21a5986026385f6fad9"} Nov 27 17:32:06 crc kubenswrapper[4792]: I1127 17:32:06.788932 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fff8f565-9t8rn" Nov 27 17:32:06 crc kubenswrapper[4792]: I1127 17:32:06.856922 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-774c69c88b-662kt"] Nov 27 17:32:06 crc kubenswrapper[4792]: I1127 17:32:06.857157 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-774c69c88b-662kt" podUID="399bbc40-3013-43ce-9de7-72105e209540" containerName="neutron-api" containerID="cri-o://2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c" gracePeriod=30 Nov 27 17:32:06 crc kubenswrapper[4792]: I1127 17:32:06.857704 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-774c69c88b-662kt" podUID="399bbc40-3013-43ce-9de7-72105e209540" containerName="neutron-httpd" containerID="cri-o://6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68" gracePeriod=30 Nov 27 17:32:07 crc kubenswrapper[4792]: I1127 17:32:07.767762 4792 generic.go:334] "Generic (PLEG): container finished" podID="399bbc40-3013-43ce-9de7-72105e209540" containerID="6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68" exitCode=0 Nov 27 17:32:07 crc kubenswrapper[4792]: I1127 17:32:07.767832 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-774c69c88b-662kt" event={"ID":"399bbc40-3013-43ce-9de7-72105e209540","Type":"ContainerDied","Data":"6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68"} Nov 27 17:32:07 crc kubenswrapper[4792]: I1127 17:32:07.772233 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bbc2fff7-567e-4a6d-918a-7f6f430486c1","Type":"ContainerStarted","Data":"d0dbc47f5736fdbab062049c80cdee285df01d81710ac6eb8f2f63e6f9a2ba6a"} Nov 27 17:32:07 crc kubenswrapper[4792]: I1127 17:32:07.772395 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 27 17:32:07 crc kubenswrapper[4792]: I1127 17:32:07.807878 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.807860983 podStartE2EDuration="2.807860983s" podCreationTimestamp="2025-11-27 17:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:07.798994882 +0000 UTC m=+1350.141821200" watchObservedRunningTime="2025-11-27 17:32:07.807860983 +0000 UTC m=+1350.150687301" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.342463 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": read tcp 
10.217.0.2:42722->10.217.0.197:9311: read: connection reset by peer" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.342553 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": read tcp 10.217.0.2:42726->10.217.0.197:9311: read: connection reset by peer" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.343316 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bf4b697fd-b9sd9" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.197:9311/healthcheck\": dial tcp 10.217.0.197:9311: connect: connection refused" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.657972 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.776899 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st77w\" (UniqueName: \"kubernetes.io/projected/399bbc40-3013-43ce-9de7-72105e209540-kube-api-access-st77w\") pod \"399bbc40-3013-43ce-9de7-72105e209540\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.776978 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-combined-ca-bundle\") pod \"399bbc40-3013-43ce-9de7-72105e209540\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.777016 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-httpd-config\") pod \"399bbc40-3013-43ce-9de7-72105e209540\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.777148 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-ovndb-tls-certs\") pod \"399bbc40-3013-43ce-9de7-72105e209540\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.777236 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-config\") pod \"399bbc40-3013-43ce-9de7-72105e209540\" (UID: \"399bbc40-3013-43ce-9de7-72105e209540\") " Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.784342 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "399bbc40-3013-43ce-9de7-72105e209540" (UID: "399bbc40-3013-43ce-9de7-72105e209540"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.788132 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399bbc40-3013-43ce-9de7-72105e209540-kube-api-access-st77w" (OuterVolumeSpecName: "kube-api-access-st77w") pod "399bbc40-3013-43ce-9de7-72105e209540" (UID: "399bbc40-3013-43ce-9de7-72105e209540"). InnerVolumeSpecName "kube-api-access-st77w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.790727 4792 generic.go:334] "Generic (PLEG): container finished" podID="399bbc40-3013-43ce-9de7-72105e209540" containerID="2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c" exitCode=0 Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.790827 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-774c69c88b-662kt" event={"ID":"399bbc40-3013-43ce-9de7-72105e209540","Type":"ContainerDied","Data":"2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c"} Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.790868 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-774c69c88b-662kt" event={"ID":"399bbc40-3013-43ce-9de7-72105e209540","Type":"ContainerDied","Data":"0dd67ad510e113695369a3a582936b17343f84fa72dbb787bc23f4f90dddff4b"} Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.790894 4792 scope.go:117] "RemoveContainer" containerID="6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.791099 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-774c69c88b-662kt" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.800535 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerStarted","Data":"4b16df6b903465b19f4b89aa1b93a684d3fd2ac91776d1d89e414d3ac08671e4"} Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.801922 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.807076 4792 generic.go:334] "Generic (PLEG): container finished" podID="13220316-7055-49fc-9b5b-747155332282" containerID="2455a6f689166ee43f49fdb87d34ae25094bf9bd53c1809dd5c56d426bfa4325" exitCode=0 Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.808093 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf4b697fd-b9sd9" event={"ID":"13220316-7055-49fc-9b5b-747155332282","Type":"ContainerDied","Data":"2455a6f689166ee43f49fdb87d34ae25094bf9bd53c1809dd5c56d426bfa4325"} Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.808126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bf4b697fd-b9sd9" event={"ID":"13220316-7055-49fc-9b5b-747155332282","Type":"ContainerDied","Data":"b6805285adcea5faf045cb00a5c359f2054b03a8bc1e9946e0f9e2c1133b8f43"} Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.808138 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6805285adcea5faf045cb00a5c359f2054b03a8bc1e9946e0f9e2c1133b8f43" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.833818 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.173152621 podStartE2EDuration="7.833796434s" 
podCreationTimestamp="2025-11-27 17:32:01 +0000 UTC" firstStartedPulling="2025-11-27 17:32:02.818818955 +0000 UTC m=+1345.161645273" lastFinishedPulling="2025-11-27 17:32:07.479462778 +0000 UTC m=+1349.822289086" observedRunningTime="2025-11-27 17:32:08.828863731 +0000 UTC m=+1351.171690039" watchObservedRunningTime="2025-11-27 17:32:08.833796434 +0000 UTC m=+1351.176622752" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.845906 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-config" (OuterVolumeSpecName: "config") pod "399bbc40-3013-43ce-9de7-72105e209540" (UID: "399bbc40-3013-43ce-9de7-72105e209540"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.857089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "399bbc40-3013-43ce-9de7-72105e209540" (UID: "399bbc40-3013-43ce-9de7-72105e209540"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.881825 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.881943 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st77w\" (UniqueName: \"kubernetes.io/projected/399bbc40-3013-43ce-9de7-72105e209540-kube-api-access-st77w\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.881957 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.881967 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.900285 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "399bbc40-3013-43ce-9de7-72105e209540" (UID: "399bbc40-3013-43ce-9de7-72105e209540"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.971284 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.981005 4792 scope.go:117] "RemoveContainer" containerID="2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c" Nov 27 17:32:08 crc kubenswrapper[4792]: I1127 17:32:08.984786 4792 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/399bbc40-3013-43ce-9de7-72105e209540-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.036926 4792 scope.go:117] "RemoveContainer" containerID="6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68" Nov 27 17:32:09 crc kubenswrapper[4792]: E1127 17:32:09.039853 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68\": container with ID starting with 6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68 not found: ID does not exist" containerID="6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.039894 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68"} err="failed to get container status \"6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68\": rpc error: code = NotFound desc = could not find container \"6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68\": container with ID starting with 6e96d4e724162e8035294801e76346f4784eb5824586843ff67885d658e71b68 not found: ID does not exist" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.039923 4792 scope.go:117] "RemoveContainer" containerID="2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c" Nov 27 17:32:09 crc kubenswrapper[4792]: E1127 17:32:09.041831 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c\": container with ID starting with 2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c not found: ID does not exist" containerID="2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.041905 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c"} err="failed to get container status \"2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c\": rpc error: code = NotFound desc = could not find container \"2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c\": container with ID starting with 2d68ce3656992f54d0bfd29f581151cbf7a30398c7933a218495c742bd89cf1c not found: ID does not exist" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.085551 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data\") pod \"13220316-7055-49fc-9b5b-747155332282\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.085700 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-combined-ca-bundle\") pod \"13220316-7055-49fc-9b5b-747155332282\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.085779 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data-custom\") pod \"13220316-7055-49fc-9b5b-747155332282\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.085901 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrc9q\" (UniqueName: \"kubernetes.io/projected/13220316-7055-49fc-9b5b-747155332282-kube-api-access-mrc9q\") pod \"13220316-7055-49fc-9b5b-747155332282\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.085964 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13220316-7055-49fc-9b5b-747155332282-logs\") pod \"13220316-7055-49fc-9b5b-747155332282\" (UID: \"13220316-7055-49fc-9b5b-747155332282\") " Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.086875 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13220316-7055-49fc-9b5b-747155332282-logs" (OuterVolumeSpecName: "logs") pod "13220316-7055-49fc-9b5b-747155332282" (UID: "13220316-7055-49fc-9b5b-747155332282"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.089378 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "13220316-7055-49fc-9b5b-747155332282" (UID: "13220316-7055-49fc-9b5b-747155332282"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.090110 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13220316-7055-49fc-9b5b-747155332282-kube-api-access-mrc9q" (OuterVolumeSpecName: "kube-api-access-mrc9q") pod "13220316-7055-49fc-9b5b-747155332282" (UID: "13220316-7055-49fc-9b5b-747155332282"). InnerVolumeSpecName "kube-api-access-mrc9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.113838 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13220316-7055-49fc-9b5b-747155332282" (UID: "13220316-7055-49fc-9b5b-747155332282"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.147233 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-774c69c88b-662kt"] Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.151625 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data" (OuterVolumeSpecName: "config-data") pod "13220316-7055-49fc-9b5b-747155332282" (UID: "13220316-7055-49fc-9b5b-747155332282"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.159248 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-774c69c88b-662kt"] Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.162808 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.190253 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.190303 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.190315 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13220316-7055-49fc-9b5b-747155332282-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.190323 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrc9q\" (UniqueName: \"kubernetes.io/projected/13220316-7055-49fc-9b5b-747155332282-kube-api-access-mrc9q\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.190469 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13220316-7055-49fc-9b5b-747155332282-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.250077 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gnpw8"] Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.250347 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" podUID="ee822979-d609-4c65-a7e6-290c9da32f04" containerName="dnsmasq-dns" containerID="cri-o://255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a" gracePeriod=10 Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.294138 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.349283 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.779122 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.827060 4792 generic.go:334] "Generic (PLEG): container finished" podID="ee822979-d609-4c65-a7e6-290c9da32f04" containerID="255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a" exitCode=0 Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.827141 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.827140 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" event={"ID":"ee822979-d609-4c65-a7e6-290c9da32f04","Type":"ContainerDied","Data":"255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a"} Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.827423 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-gnpw8" event={"ID":"ee822979-d609-4c65-a7e6-290c9da32f04","Type":"ContainerDied","Data":"5617b64d1eb5bdac8588bdd6fa05511c0db866fbcf18eb6a3adea3d786cba896"} Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.827443 4792 scope.go:117] "RemoveContainer" containerID="255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.828738 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bf4b697fd-b9sd9" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.829532 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerName="cinder-scheduler" containerID="cri-o://c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92" gracePeriod=30 Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.829640 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerName="probe" containerID="cri-o://322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6" gracePeriod=30 Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.866478 4792 scope.go:117] "RemoveContainer" containerID="47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.875124 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bf4b697fd-b9sd9"] Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.887211 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6bf4b697fd-b9sd9"] Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.899978 4792 scope.go:117] "RemoveContainer" containerID="255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a" Nov 27 17:32:09 crc kubenswrapper[4792]: E1127 17:32:09.901825 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a\": container with ID starting with 255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a not found: ID does not exist" containerID="255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.901881 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a"} err="failed to get container status \"255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a\": rpc error: code = NotFound desc = could not find container \"255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a\": container with ID starting with 255688fa1f5402aa3419042e7d5df1f5254169b186dc9a0b605b63524ed8e43a not found: ID does not exist" Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 
17:32:09.901917 4792 scope.go:117] "RemoveContainer" containerID="47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1"
Nov 27 17:32:09 crc kubenswrapper[4792]: E1127 17:32:09.902367 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1\": container with ID starting with 47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1 not found: ID does not exist" containerID="47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1"
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.902422 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1"} err="failed to get container status \"47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1\": rpc error: code = NotFound desc = could not find container \"47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1\": container with ID starting with 47374bdca1901a6cade070c39d52aa5aca8da9eb7da303cfcbc2da5b176843f1 not found: ID does not exist"
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.914785 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-sb\") pod \"ee822979-d609-4c65-a7e6-290c9da32f04\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") "
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.914869 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-nb\") pod \"ee822979-d609-4c65-a7e6-290c9da32f04\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") "
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.914916 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-config\") pod \"ee822979-d609-4c65-a7e6-290c9da32f04\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") "
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.915072 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-svc\") pod \"ee822979-d609-4c65-a7e6-290c9da32f04\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") "
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.915154 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-swift-storage-0\") pod \"ee822979-d609-4c65-a7e6-290c9da32f04\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") "
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.915182 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m772t\" (UniqueName: \"kubernetes.io/projected/ee822979-d609-4c65-a7e6-290c9da32f04-kube-api-access-m772t\") pod \"ee822979-d609-4c65-a7e6-290c9da32f04\" (UID: \"ee822979-d609-4c65-a7e6-290c9da32f04\") "
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.931735 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee822979-d609-4c65-a7e6-290c9da32f04-kube-api-access-m772t" (OuterVolumeSpecName: "kube-api-access-m772t") pod "ee822979-d609-4c65-a7e6-290c9da32f04" (UID: "ee822979-d609-4c65-a7e6-290c9da32f04"). InnerVolumeSpecName "kube-api-access-m772t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.993029 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee822979-d609-4c65-a7e6-290c9da32f04" (UID: "ee822979-d609-4c65-a7e6-290c9da32f04"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.994141 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee822979-d609-4c65-a7e6-290c9da32f04" (UID: "ee822979-d609-4c65-a7e6-290c9da32f04"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:32:09 crc kubenswrapper[4792]: I1127 17:32:09.995778 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee822979-d609-4c65-a7e6-290c9da32f04" (UID: "ee822979-d609-4c65-a7e6-290c9da32f04"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.001068 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-config" (OuterVolumeSpecName: "config") pod "ee822979-d609-4c65-a7e6-290c9da32f04" (UID: "ee822979-d609-4c65-a7e6-290c9da32f04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.001366 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee822979-d609-4c65-a7e6-290c9da32f04" (UID: "ee822979-d609-4c65-a7e6-290c9da32f04"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.018317 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.018384 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.018488 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.018498 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.018507 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m772t\" (UniqueName: \"kubernetes.io/projected/ee822979-d609-4c65-a7e6-290c9da32f04-kube-api-access-m772t\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.018516 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee822979-d609-4c65-a7e6-290c9da32f04-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.181370 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gnpw8"]
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.191519 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-gnpw8"]
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.710940 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13220316-7055-49fc-9b5b-747155332282" path="/var/lib/kubelet/pods/13220316-7055-49fc-9b5b-747155332282/volumes"
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.712471 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399bbc40-3013-43ce-9de7-72105e209540" path="/var/lib/kubelet/pods/399bbc40-3013-43ce-9de7-72105e209540/volumes"
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.713796 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee822979-d609-4c65-a7e6-290c9da32f04" path="/var/lib/kubelet/pods/ee822979-d609-4c65-a7e6-290c9da32f04/volumes"
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.844727 4792 generic.go:334] "Generic (PLEG): container finished" podID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerID="322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6" exitCode=0
Nov 27 17:32:10 crc kubenswrapper[4792]: I1127 17:32:10.844810 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dff0fe27-95c6-4685-8ac5-dd07fe300f3a","Type":"ContainerDied","Data":"322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6"}
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.414558 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.581543 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-etc-machine-id\") pod \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") "
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.581663 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dff0fe27-95c6-4685-8ac5-dd07fe300f3a" (UID: "dff0fe27-95c6-4685-8ac5-dd07fe300f3a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.581699 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data\") pod \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") "
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.581730 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data-custom\") pod \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") "
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.581840 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5c6d\" (UniqueName: \"kubernetes.io/projected/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-kube-api-access-x5c6d\") pod \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") "
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.581876 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-scripts\") pod \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") "
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.581941 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-combined-ca-bundle\") pod \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\" (UID: \"dff0fe27-95c6-4685-8ac5-dd07fe300f3a\") "
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.582504 4792 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.587963 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dff0fe27-95c6-4685-8ac5-dd07fe300f3a" (UID: "dff0fe27-95c6-4685-8ac5-dd07fe300f3a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.590216 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-kube-api-access-x5c6d" (OuterVolumeSpecName: "kube-api-access-x5c6d") pod "dff0fe27-95c6-4685-8ac5-dd07fe300f3a" (UID: "dff0fe27-95c6-4685-8ac5-dd07fe300f3a"). InnerVolumeSpecName "kube-api-access-x5c6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.598089 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-scripts" (OuterVolumeSpecName: "scripts") pod "dff0fe27-95c6-4685-8ac5-dd07fe300f3a" (UID: "dff0fe27-95c6-4685-8ac5-dd07fe300f3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.684258 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.684286 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5c6d\" (UniqueName: \"kubernetes.io/projected/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-kube-api-access-x5c6d\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.684297 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-scripts\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.684580 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dff0fe27-95c6-4685-8ac5-dd07fe300f3a" (UID: "dff0fe27-95c6-4685-8ac5-dd07fe300f3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.729269 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data" (OuterVolumeSpecName: "config-data") pod "dff0fe27-95c6-4685-8ac5-dd07fe300f3a" (UID: "dff0fe27-95c6-4685-8ac5-dd07fe300f3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.787013 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.787045 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff0fe27-95c6-4685-8ac5-dd07fe300f3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.882149 4792 generic.go:334] "Generic (PLEG): container finished" podID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerID="c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92" exitCode=0 Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.882197 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dff0fe27-95c6-4685-8ac5-dd07fe300f3a","Type":"ContainerDied","Data":"c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92"} Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.882243 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"dff0fe27-95c6-4685-8ac5-dd07fe300f3a","Type":"ContainerDied","Data":"3602f142c2dab023f5038035c264848459e7685836ee19bd507887165fcdd1ab"} Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.882264 4792 scope.go:117] "RemoveContainer" containerID="322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.882477 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.918773 4792 scope.go:117] "RemoveContainer" containerID="c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.925006 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.944374 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.969723 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.970187 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee822979-d609-4c65-a7e6-290c9da32f04" containerName="dnsmasq-dns" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970200 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee822979-d609-4c65-a7e6-290c9da32f04" containerName="dnsmasq-dns" Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.970232 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerName="cinder-scheduler" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970239 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerName="cinder-scheduler" Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.970247 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970254 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api" Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.970263 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee822979-d609-4c65-a7e6-290c9da32f04" containerName="init" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970271 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee822979-d609-4c65-a7e6-290c9da32f04" containerName="init" Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.970289 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399bbc40-3013-43ce-9de7-72105e209540" containerName="neutron-httpd" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970294 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="399bbc40-3013-43ce-9de7-72105e209540" containerName="neutron-httpd" Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.970308 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399bbc40-3013-43ce-9de7-72105e209540" containerName="neutron-api" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970314 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="399bbc40-3013-43ce-9de7-72105e209540" containerName="neutron-api" Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.970325 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerName="probe" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970331 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerName="probe" Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.970339 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api-log" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970344 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api-log" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970572 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="399bbc40-3013-43ce-9de7-72105e209540" containerName="neutron-api" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970588 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee822979-d609-4c65-a7e6-290c9da32f04" containerName="dnsmasq-dns" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970603 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="399bbc40-3013-43ce-9de7-72105e209540" containerName="neutron-httpd" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970617 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api-log" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970633 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerName="cinder-scheduler" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970670 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="13220316-7055-49fc-9b5b-747155332282" containerName="barbican-api" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.970690 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" containerName="probe" Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.971910 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.971910 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.976657 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.982749 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.996290 4792 scope.go:117] "RemoveContainer" containerID="322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6"
Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.996985 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6\": container with ID starting with 322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6 not found: ID does not exist" containerID="322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6"
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.997010 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6"} err="failed to get container status \"322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6\": rpc error: code = NotFound desc = could not find container \"322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6\": container with ID starting with 322b92ecf7a654357f82f509d317fba9b2f6f031b0de155b701cb1b048735fc6 not found: ID does not exist"
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.997032 4792 scope.go:117] "RemoveContainer" containerID="c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92"
Nov 27 17:32:12 crc kubenswrapper[4792]: E1127 17:32:12.997406 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92\": container with ID starting with c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92 not found: ID does not exist" containerID="c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92"
Nov 27 17:32:12 crc kubenswrapper[4792]: I1127 17:32:12.997428 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92"} err="failed to get container status \"c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92\": rpc error: code = NotFound desc = could not find container \"c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92\": container with ID starting with c78cc8c4f3d83e9dd89ae0f85aaba155ff6bd6c72120535b7af4a6b24a23de92 not found: ID does not exist"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.094166 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.094229 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-scripts\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.094485 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-config-data\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.094756 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbgvz\" (UniqueName: \"kubernetes.io/projected/301f71fa-43fd-4005-a753-5127a2e7df97-kube-api-access-rbgvz\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.094833 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/301f71fa-43fd-4005-a753-5127a2e7df97-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.094928 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.198306 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-config-data\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.198497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbgvz\" (UniqueName: \"kubernetes.io/projected/301f71fa-43fd-4005-a753-5127a2e7df97-kube-api-access-rbgvz\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.198550 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/301f71fa-43fd-4005-a753-5127a2e7df97-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.198625 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.198863 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.198964 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-scripts\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.201871 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/301f71fa-43fd-4005-a753-5127a2e7df97-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.204823 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-scripts\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.208943 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.215724 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.226563 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/301f71fa-43fd-4005-a753-5127a2e7df97-config-data\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.226886 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbgvz\" (UniqueName: \"kubernetes.io/projected/301f71fa-43fd-4005-a753-5127a2e7df97-kube-api-access-rbgvz\") pod \"cinder-scheduler-0\" (UID: \"301f71fa-43fd-4005-a753-5127a2e7df97\") " pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.323193 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.827067 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 27 17:32:13 crc kubenswrapper[4792]: W1127 17:32:13.827519 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod301f71fa_43fd_4005_a753_5127a2e7df97.slice/crio-d260b1efe606fcf6da560e933dadb0f0f3e089fa3d2aca1169810f2229b5a425 WatchSource:0}: Error finding container d260b1efe606fcf6da560e933dadb0f0f3e089fa3d2aca1169810f2229b5a425: Status 404 returned error can't find the container with id d260b1efe606fcf6da560e933dadb0f0f3e089fa3d2aca1169810f2229b5a425
Nov 27 17:32:13 crc kubenswrapper[4792]: I1127 17:32:13.901299 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301f71fa-43fd-4005-a753-5127a2e7df97","Type":"ContainerStarted","Data":"d260b1efe606fcf6da560e933dadb0f0f3e089fa3d2aca1169810f2229b5a425"}
Nov 27 17:32:14 crc kubenswrapper[4792]: I1127 17:32:14.703785 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff0fe27-95c6-4685-8ac5-dd07fe300f3a" path="/var/lib/kubelet/pods/dff0fe27-95c6-4685-8ac5-dd07fe300f3a/volumes"
Nov 27 17:32:14 crc kubenswrapper[4792]: I1127 17:32:14.917685 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301f71fa-43fd-4005-a753-5127a2e7df97","Type":"ContainerStarted","Data":"c0fb2a586363d07f9c34960a3b1626bbe7e3907e9a4ba608de58cc5d96556040"}
Nov 27 17:32:15 crc kubenswrapper[4792]: I1127 17:32:15.922558 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68b5cd97dd-hfxs2"
Nov 27 17:32:15 crc kubenswrapper[4792]: I1127 17:32:15.925936 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-68b5cd97dd-hfxs2"
Nov 27 17:32:15 crc kubenswrapper[4792]: I1127 17:32:15.933991 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"301f71fa-43fd-4005-a753-5127a2e7df97","Type":"ContainerStarted","Data":"38b2996841a479fd7921acc677e1eea349ea0c8842da5833f736459450048229"}
Nov 27 17:32:16 crc kubenswrapper[4792]: I1127 17:32:16.021241 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.021221076 podStartE2EDuration="4.021221076s" podCreationTimestamp="2025-11-27 17:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:16.005050743 +0000 UTC m=+1358.347877061" watchObservedRunningTime="2025-11-27 17:32:16.021221076 +0000 UTC m=+1358.364047394"
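The pod_startup_latency_tracker entry summarizes the rebuild: cinder-scheduler-0 was created at 17:32:12 and observed running at 17:32:16, a ~4.02 s startup with no image pull (both pulling timestamps are the zero value). A minimal Python sketch that extracts these SLO figures from the hypothetical kubelet.log:

    import re

    # Pull the startup-SLO fields kubelet logs once it observes a pod
    # running; zero-valued firstStartedPulling/lastFinishedPulling mean no
    # image pull contributed to the duration.
    SLO = re.compile(r'pod="([^"]+)" podStartSLOduration=([\d.]+)')

    def startup_slos(lines):
        for line in lines:
            m = SLO.search(line)
            if m:
                yield m.group(1), float(m.group(2))

    with open("kubelet.log") as f:  # hypothetical file holding this excerpt
        for pod, seconds in startup_slos(f):
            print(f"{pod}: {seconds:.2f}s from creation to observed running")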
Nov 27 17:32:17 crc kubenswrapper[4792]: I1127 17:32:17.537655 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 27 17:32:18 crc kubenswrapper[4792]: I1127 17:32:18.323308 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 27 17:32:19 crc kubenswrapper[4792]: I1127 17:32:19.225766 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6667648786-v844v"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.269964 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.272013 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.276310 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-klt6r"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.276396 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.279706 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.285752 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.440891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.440933 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.441584 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.441818 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbk8b\" (UniqueName: \"kubernetes.io/projected/0b3bbd7d-9560-4b5a-8614-61da62910202-kube-api-access-fbk8b\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.543399 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.543458 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbk8b\" (UniqueName: \"kubernetes.io/projected/0b3bbd7d-9560-4b5a-8614-61da62910202-kube-api-access-fbk8b\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.543743 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.543777 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.544818 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.549431 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.551086 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.561098 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbk8b\" (UniqueName: \"kubernetes.io/projected/0b3bbd7d-9560-4b5a-8614-61da62910202-kube-api-access-fbk8b\") pod \"openstackclient\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.606586 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.623970 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.633954 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.657476 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.659529 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.720716 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 27 17:32:22 crc kubenswrapper[4792]: E1127 17:32:22.772088 4792 log.go:32] "RunPodSandbox from runtime service failed" err=<
Nov 27 17:32:22 crc kubenswrapper[4792]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_0b3bbd7d-9560-4b5a-8614-61da62910202_0(7eb7fe8aa2766aeb8dcab74ed11b0257ec3d82bae10a11e4cbcef274f3d7491f): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7eb7fe8aa2766aeb8dcab74ed11b0257ec3d82bae10a11e4cbcef274f3d7491f" Netns:"/var/run/netns/a0dfba00-3d68-4b25-b54f-052a65d7e792" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=7eb7fe8aa2766aeb8dcab74ed11b0257ec3d82bae10a11e4cbcef274f3d7491f;K8S_POD_UID=0b3bbd7d-9560-4b5a-8614-61da62910202" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/0b3bbd7d-9560-4b5a-8614-61da62910202]: expected pod UID "0b3bbd7d-9560-4b5a-8614-61da62910202" but got "e88dd573-027f-458e-81ed-c133e141afb6" from Kube API
Nov 27 17:32:22 crc kubenswrapper[4792]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 27 17:32:22 crc kubenswrapper[4792]: >
Nov 27 17:32:22 crc kubenswrapper[4792]: E1127 17:32:22.772449 4792 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Nov 27 17:32:22 crc kubenswrapper[4792]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_0b3bbd7d-9560-4b5a-8614-61da62910202_0(7eb7fe8aa2766aeb8dcab74ed11b0257ec3d82bae10a11e4cbcef274f3d7491f): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7eb7fe8aa2766aeb8dcab74ed11b0257ec3d82bae10a11e4cbcef274f3d7491f" Netns:"/var/run/netns/a0dfba00-3d68-4b25-b54f-052a65d7e792" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=7eb7fe8aa2766aeb8dcab74ed11b0257ec3d82bae10a11e4cbcef274f3d7491f;K8S_POD_UID=0b3bbd7d-9560-4b5a-8614-61da62910202" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/0b3bbd7d-9560-4b5a-8614-61da62910202]: expected pod UID "0b3bbd7d-9560-4b5a-8614-61da62910202" but got "e88dd573-027f-458e-81ed-c133e141afb6" from Kube API
Nov 27 17:32:22 crc kubenswrapper[4792]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 27 17:32:22 crc kubenswrapper[4792]: > pod="openstack/openstackclient"
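The RunPodSandbox failure above is a transient race, not a persistent CNI fault: openstackclient was deleted and recreated between the sandbox request and Multus's API-server check, so the CNI ADD still carried the old pod UID (0b3bbd7d-...) while the API already reported the new one (e88dd573-...). Kubelet retries with the new UID and the sandbox starts moments later. A minimal Python sketch that flags such UID races in the hypothetical kubelet.log:

    import re

    # Surface Multus "expected pod UID X but got Y" mismatches, which mark
    # a sandbox create that raced a pod delete-and-recreate.
    MISMATCH = re.compile(
        r'expected pod UID \\?"([0-9a-f-]+)\\?" but got \\?"([0-9a-f-]+)\\?"')

    def uid_mismatches(lines):
        for line in lines:
            m = MISMATCH.search(line)
            if m:
                yield m.group(1), m.group(2)

    with open("kubelet.log") as f:  # hypothetical file holding this excerpt
        for old_uid, new_uid in uid_mismatches(f):
            print(f"sandbox raced pod recreation: {old_uid} -> {new_uid}")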
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.854067 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4rh\" (UniqueName: \"kubernetes.io/projected/e88dd573-027f-458e-81ed-c133e141afb6-kube-api-access-4x4rh\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.854154 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e88dd573-027f-458e-81ed-c133e141afb6-openstack-config\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.854203 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88dd573-027f-458e-81ed-c133e141afb6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.854417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e88dd573-027f-458e-81ed-c133e141afb6-openstack-config-secret\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.956270 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e88dd573-027f-458e-81ed-c133e141afb6-openstack-config-secret\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.956455 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4rh\" (UniqueName: \"kubernetes.io/projected/e88dd573-027f-458e-81ed-c133e141afb6-kube-api-access-4x4rh\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.956489 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e88dd573-027f-458e-81ed-c133e141afb6-openstack-config\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.956516 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88dd573-027f-458e-81ed-c133e141afb6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.957940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e88dd573-027f-458e-81ed-c133e141afb6-openstack-config\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.960571 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e88dd573-027f-458e-81ed-c133e141afb6-openstack-config-secret\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.960900 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88dd573-027f-458e-81ed-c133e141afb6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:22 crc kubenswrapper[4792]: I1127 17:32:22.974297 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4rh\" (UniqueName: \"kubernetes.io/projected/e88dd573-027f-458e-81ed-c133e141afb6-kube-api-access-4x4rh\") pod \"openstackclient\" (UID: \"e88dd573-027f-458e-81ed-c133e141afb6\") " pod="openstack/openstackclient"
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.027806 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.037540 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.044133 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0b3bbd7d-9560-4b5a-8614-61da62910202" podUID="e88dd573-027f-458e-81ed-c133e141afb6"
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.058940 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.166369 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config\") pod \"0b3bbd7d-9560-4b5a-8614-61da62910202\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") "
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.166487 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config-secret\") pod \"0b3bbd7d-9560-4b5a-8614-61da62910202\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") "
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.166523 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-combined-ca-bundle\") pod \"0b3bbd7d-9560-4b5a-8614-61da62910202\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") "
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.166594 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbk8b\" (UniqueName: \"kubernetes.io/projected/0b3bbd7d-9560-4b5a-8614-61da62910202-kube-api-access-fbk8b\") pod \"0b3bbd7d-9560-4b5a-8614-61da62910202\" (UID: \"0b3bbd7d-9560-4b5a-8614-61da62910202\") "
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.168957 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0b3bbd7d-9560-4b5a-8614-61da62910202" (UID: "0b3bbd7d-9560-4b5a-8614-61da62910202"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.173567 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0b3bbd7d-9560-4b5a-8614-61da62910202" (UID: "0b3bbd7d-9560-4b5a-8614-61da62910202"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.173596 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3bbd7d-9560-4b5a-8614-61da62910202-kube-api-access-fbk8b" (OuterVolumeSpecName: "kube-api-access-fbk8b") pod "0b3bbd7d-9560-4b5a-8614-61da62910202" (UID: "0b3bbd7d-9560-4b5a-8614-61da62910202"). InnerVolumeSpecName "kube-api-access-fbk8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.175858 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b3bbd7d-9560-4b5a-8614-61da62910202" (UID: "0b3bbd7d-9560-4b5a-8614-61da62910202"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.269520 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.269564 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.269578 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3bbd7d-9560-4b5a-8614-61da62910202-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.269588 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbk8b\" (UniqueName: \"kubernetes.io/projected/0b3bbd7d-9560-4b5a-8614-61da62910202-kube-api-access-fbk8b\") on node \"crc\" DevicePath \"\""
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.527658 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 27 17:32:23 crc kubenswrapper[4792]: W1127 17:32:23.534951 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode88dd573_027f_458e_81ed_c133e141afb6.slice/crio-bf939bf6a5ed2dfd7454426572fcf0fdfe805918b3bb2090ed42cc211e91a20b WatchSource:0}: Error finding container bf939bf6a5ed2dfd7454426572fcf0fdfe805918b3bb2090ed42cc211e91a20b: Status 404 returned error can't find the container with id bf939bf6a5ed2dfd7454426572fcf0fdfe805918b3bb2090ed42cc211e91a20b
Nov 27 17:32:23 crc kubenswrapper[4792]: I1127 17:32:23.566916 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 27 17:32:24 crc kubenswrapper[4792]: I1127 17:32:24.039515 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 27 17:32:24 crc kubenswrapper[4792]: I1127 17:32:24.039913 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e88dd573-027f-458e-81ed-c133e141afb6","Type":"ContainerStarted","Data":"bf939bf6a5ed2dfd7454426572fcf0fdfe805918b3bb2090ed42cc211e91a20b"}
Nov 27 17:32:24 crc kubenswrapper[4792]: I1127 17:32:24.058332 4792 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="0b3bbd7d-9560-4b5a-8614-61da62910202" podUID="e88dd573-027f-458e-81ed-c133e141afb6"
Nov 27 17:32:24 crc kubenswrapper[4792]: I1127 17:32:24.700258 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3bbd7d-9560-4b5a-8614-61da62910202" path="/var/lib/kubelet/pods/0b3bbd7d-9560-4b5a-8614-61da62910202/volumes"
Nov 27 17:32:25 crc kubenswrapper[4792]: I1127 17:32:25.999066 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 27 17:32:25 crc kubenswrapper[4792]: I1127 17:32:25.999471 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="sg-core" containerID="cri-o://42c704040b2e7114532a3840fab2c929298f43ca59facf7569112198b3ba3355" gracePeriod=30
Nov 27 17:32:25 crc kubenswrapper[4792]: I1127 17:32:25.999580 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="proxy-httpd" containerID="cri-o://4b16df6b903465b19f4b89aa1b93a684d3fd2ac91776d1d89e414d3ac08671e4" gracePeriod=30
Nov 27 17:32:25 crc kubenswrapper[4792]: I1127 17:32:25.999721 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="ceilometer-notification-agent" containerID="cri-o://9b747fe5686d277f3530c0f5bc248d5e907378aec673bff5cb5400dfface8180" gracePeriod=30
Nov 27 17:32:25 crc kubenswrapper[4792]: I1127 17:32:25.999404 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="ceilometer-central-agent" containerID="cri-o://ad401a8d08e41e8e46cd359acbfea5359b344bb5da8af447810b4a705854c12c" gracePeriod=30
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.103947 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": read tcp 10.217.0.2:60214->10.217.0.202:3000: read: connection reset by peer"
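For the ceilometer-0 deletion, kubelet issues one "Killing container with a grace period" per container, each carrying the pod's 30-second grace period; the readiness probe failure that follows (connection reset) is expected once proxy-httpd starts shutting down. A minimal Python sketch listing those kill requests from the hypothetical kubelet.log:

    import re

    # List per-container kill requests for deleted pods, like the four
    # ceilometer-0 containers above that each got a 30s grace period.
    KILL = re.compile(r'"Killing container with a grace period" pod="([^"]+)"'
                      r'.*?containerName="([^"]+)".*?gracePeriod=(\d+)')

    def kill_events(lines):
        for line in lines:
            m = KILL.search(line)
            if m:
                yield m.group(1), m.group(2), int(m.group(3))

    with open("kubelet.log") as f:  # hypothetical file holding this excerpt
        for pod, container, grace in kill_events(f):
            print(f"{pod}/{container}: grace period {grace}s")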
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.525344 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-55455cb8cf-gtjxc"]
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.528131 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.530312 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.530496 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.530727 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.537187 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55455cb8cf-gtjxc"]
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.546914 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-internal-tls-certs\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.546985 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-run-httpd\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.547015 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-public-tls-certs\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.547069 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8zw\" (UniqueName: \"kubernetes.io/projected/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-kube-api-access-gn8zw\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.547123 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-etc-swift\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.547167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-config-data\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.547271 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-log-httpd\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.547311 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-combined-ca-bundle\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.649159 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-log-httpd\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.649214 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-combined-ca-bundle\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.649268 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-internal-tls-certs\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.649302 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-run-httpd\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.649318 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-public-tls-certs\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.649352 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8zw\" (UniqueName: \"kubernetes.io/projected/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-kube-api-access-gn8zw\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.649387 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-etc-swift\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.649414 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-config-data\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.650689 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-run-httpd\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.650733 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-log-httpd\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.658355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-public-tls-certs\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.659008 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-internal-tls-certs\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.659891 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-config-data\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.659975 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-combined-ca-bundle\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.663404 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-etc-swift\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.672511 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8zw\" (UniqueName: \"kubernetes.io/projected/9ace987a-3f62-48ce-8c4b-b9c50cd2a29e-kube-api-access-gn8zw\") pod \"swift-proxy-55455cb8cf-gtjxc\" (UID: \"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e\") " pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:26 crc kubenswrapper[4792]: I1127 17:32:26.853746 4792 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-proxy-55455cb8cf-gtjxc" Nov 27 17:32:27 crc kubenswrapper[4792]: I1127 17:32:27.107924 4792 generic.go:334] "Generic (PLEG): container finished" podID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerID="4b16df6b903465b19f4b89aa1b93a684d3fd2ac91776d1d89e414d3ac08671e4" exitCode=0 Nov 27 17:32:27 crc kubenswrapper[4792]: I1127 17:32:27.107963 4792 generic.go:334] "Generic (PLEG): container finished" podID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerID="42c704040b2e7114532a3840fab2c929298f43ca59facf7569112198b3ba3355" exitCode=2 Nov 27 17:32:27 crc kubenswrapper[4792]: I1127 17:32:27.107974 4792 generic.go:334] "Generic (PLEG): container finished" podID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerID="ad401a8d08e41e8e46cd359acbfea5359b344bb5da8af447810b4a705854c12c" exitCode=0 Nov 27 17:32:27 crc kubenswrapper[4792]: I1127 17:32:27.108006 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerDied","Data":"4b16df6b903465b19f4b89aa1b93a684d3fd2ac91776d1d89e414d3ac08671e4"} Nov 27 17:32:27 crc kubenswrapper[4792]: I1127 17:32:27.108034 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerDied","Data":"42c704040b2e7114532a3840fab2c929298f43ca59facf7569112198b3ba3355"} Nov 27 17:32:27 crc kubenswrapper[4792]: I1127 17:32:27.108045 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerDied","Data":"ad401a8d08e41e8e46cd359acbfea5359b344bb5da8af447810b4a705854c12c"} Nov 27 17:32:27 crc kubenswrapper[4792]: I1127 17:32:27.498378 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55455cb8cf-gtjxc"] Nov 27 17:32:27 crc kubenswrapper[4792]: W1127 17:32:27.506455 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ace987a_3f62_48ce_8c4b_b9c50cd2a29e.slice/crio-0987b59db61e8f38c77bde55c8dbeb4bf9d8aaeabc04949895076c2f45194087 WatchSource:0}: Error finding container 0987b59db61e8f38c77bde55c8dbeb4bf9d8aaeabc04949895076c2f45194087: Status 404 returned error can't find the container with id 0987b59db61e8f38c77bde55c8dbeb4bf9d8aaeabc04949895076c2f45194087 Nov 27 17:32:28 crc kubenswrapper[4792]: I1127 17:32:28.119186 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55455cb8cf-gtjxc" event={"ID":"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e","Type":"ContainerStarted","Data":"a8cf90fb7b075e1adfeffb75d322f6213d6bb60555ab039c0aa8f6fde72753fb"} Nov 27 17:32:28 crc kubenswrapper[4792]: I1127 17:32:28.119492 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55455cb8cf-gtjxc" event={"ID":"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e","Type":"ContainerStarted","Data":"9100d2754527b65862be3bf872f20f24b541206c5f423ea50054b74d8fd7ddae"} Nov 27 17:32:28 crc kubenswrapper[4792]: I1127 17:32:28.119508 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55455cb8cf-gtjxc" event={"ID":"9ace987a-3f62-48ce-8c4b-b9c50cd2a29e","Type":"ContainerStarted","Data":"0987b59db61e8f38c77bde55c8dbeb4bf9d8aaeabc04949895076c2f45194087"} Nov 27 17:32:28 crc kubenswrapper[4792]: I1127 17:32:28.119548 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55455cb8cf-gtjxc" 
Nov 27 17:32:28 crc kubenswrapper[4792]: I1127 17:32:28.119570 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55455cb8cf-gtjxc" Nov 27 17:32:28 crc kubenswrapper[4792]: I1127 17:32:28.170299 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-55455cb8cf-gtjxc" podStartSLOduration=2.170275445 podStartE2EDuration="2.170275445s" podCreationTimestamp="2025-11-27 17:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:28.157752833 +0000 UTC m=+1370.500579171" watchObservedRunningTime="2025-11-27 17:32:28.170275445 +0000 UTC m=+1370.513101763" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.769463 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-t282f"] Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.771999 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.779585 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-t282f"] Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.864639 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb12910-da22-4f2b-85ba-31ea98c5ee73-operator-scripts\") pod \"nova-api-db-create-t282f\" (UID: \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\") " pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.865010 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-774d7\" (UniqueName: \"kubernetes.io/projected/8bb12910-da22-4f2b-85ba-31ea98c5ee73-kube-api-access-774d7\") pod \"nova-api-db-create-t282f\" (UID: \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\") " pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.876363 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-fgbxf"] Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.878285 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.887779 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fgbxf"] Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.969041 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8647-account-create-update-njdbf"] Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.971585 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.969207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fpp\" (UniqueName: \"kubernetes.io/projected/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-kube-api-access-p6fpp\") pod \"nova-cell0-db-create-fgbxf\" (UID: \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\") " pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.972906 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774d7\" (UniqueName: \"kubernetes.io/projected/8bb12910-da22-4f2b-85ba-31ea98c5ee73-kube-api-access-774d7\") pod \"nova-api-db-create-t282f\" (UID: \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\") " pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.973063 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb12910-da22-4f2b-85ba-31ea98c5ee73-operator-scripts\") pod \"nova-api-db-create-t282f\" (UID: \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\") " pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.973109 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-operator-scripts\") pod \"nova-cell0-db-create-fgbxf\" (UID: \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\") " pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.973982 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb12910-da22-4f2b-85ba-31ea98c5ee73-operator-scripts\") pod \"nova-api-db-create-t282f\" (UID: \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\") " pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.976413 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.995265 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8647-account-create-update-njdbf"] Nov 27 17:32:30 crc kubenswrapper[4792]: I1127 17:32:30.997463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774d7\" (UniqueName: \"kubernetes.io/projected/8bb12910-da22-4f2b-85ba-31ea98c5ee73-kube-api-access-774d7\") pod \"nova-api-db-create-t282f\" (UID: \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\") " pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.069004 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mc9d5"] Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.070517 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.075039 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57df2fa-2e89-48d4-86ae-cd332706de3f-operator-scripts\") pod \"nova-api-8647-account-create-update-njdbf\" (UID: \"b57df2fa-2e89-48d4-86ae-cd332706de3f\") " pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.075095 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6fpp\" (UniqueName: \"kubernetes.io/projected/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-kube-api-access-p6fpp\") pod \"nova-cell0-db-create-fgbxf\" (UID: \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\") " pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.075214 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrjv4\" (UniqueName: \"kubernetes.io/projected/b57df2fa-2e89-48d4-86ae-cd332706de3f-kube-api-access-zrjv4\") pod \"nova-api-8647-account-create-update-njdbf\" (UID: \"b57df2fa-2e89-48d4-86ae-cd332706de3f\") " pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.075258 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-operator-scripts\") pod \"nova-cell0-db-create-fgbxf\" (UID: \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\") " pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.075959 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-operator-scripts\") pod \"nova-cell0-db-create-fgbxf\" (UID: \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\") " pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.079846 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mc9d5"] Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.101258 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6fpp\" (UniqueName: \"kubernetes.io/projected/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-kube-api-access-p6fpp\") pod \"nova-cell0-db-create-fgbxf\" (UID: \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\") " pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.153673 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.177503 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1287d1-aab3-4631-9aa7-f208aedcf915-operator-scripts\") pod \"nova-cell1-db-create-mc9d5\" (UID: \"7f1287d1-aab3-4631-9aa7-f208aedcf915\") " pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.177785 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsx4m\" (UniqueName: \"kubernetes.io/projected/7f1287d1-aab3-4631-9aa7-f208aedcf915-kube-api-access-xsx4m\") pod \"nova-cell1-db-create-mc9d5\" (UID: \"7f1287d1-aab3-4631-9aa7-f208aedcf915\") " pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.177930 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrjv4\" (UniqueName: \"kubernetes.io/projected/b57df2fa-2e89-48d4-86ae-cd332706de3f-kube-api-access-zrjv4\") pod \"nova-api-8647-account-create-update-njdbf\" (UID: \"b57df2fa-2e89-48d4-86ae-cd332706de3f\") " pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.178122 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57df2fa-2e89-48d4-86ae-cd332706de3f-operator-scripts\") pod \"nova-api-8647-account-create-update-njdbf\" (UID: \"b57df2fa-2e89-48d4-86ae-cd332706de3f\") " pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.179310 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57df2fa-2e89-48d4-86ae-cd332706de3f-operator-scripts\") pod \"nova-api-8647-account-create-update-njdbf\" (UID: \"b57df2fa-2e89-48d4-86ae-cd332706de3f\") " pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.180911 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0ac2-account-create-update-wsmcm"] Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.183667 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.186236 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.193673 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrjv4\" (UniqueName: \"kubernetes.io/projected/b57df2fa-2e89-48d4-86ae-cd332706de3f-kube-api-access-zrjv4\") pod \"nova-api-8647-account-create-update-njdbf\" (UID: \"b57df2fa-2e89-48d4-86ae-cd332706de3f\") " pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.202029 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0ac2-account-create-update-wsmcm"] Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.230939 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.269537 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f4e9-account-create-update-4td2w"] Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.271222 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.273878 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.280494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsx4m\" (UniqueName: \"kubernetes.io/projected/7f1287d1-aab3-4631-9aa7-f208aedcf915-kube-api-access-xsx4m\") pod \"nova-cell1-db-create-mc9d5\" (UID: \"7f1287d1-aab3-4631-9aa7-f208aedcf915\") " pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.280535 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6e9c50-f83f-4b2c-975d-38f300d84169-operator-scripts\") pod \"nova-cell0-0ac2-account-create-update-wsmcm\" (UID: \"9a6e9c50-f83f-4b2c-975d-38f300d84169\") " pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.280676 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knr78\" (UniqueName: \"kubernetes.io/projected/9a6e9c50-f83f-4b2c-975d-38f300d84169-kube-api-access-knr78\") pod \"nova-cell0-0ac2-account-create-update-wsmcm\" (UID: \"9a6e9c50-f83f-4b2c-975d-38f300d84169\") " pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.280779 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1287d1-aab3-4631-9aa7-f208aedcf915-operator-scripts\") pod \"nova-cell1-db-create-mc9d5\" (UID: \"7f1287d1-aab3-4631-9aa7-f208aedcf915\") " pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.281694 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1287d1-aab3-4631-9aa7-f208aedcf915-operator-scripts\") pod \"nova-cell1-db-create-mc9d5\" (UID: \"7f1287d1-aab3-4631-9aa7-f208aedcf915\") " pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.285734 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f4e9-account-create-update-4td2w"] Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.307015 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsx4m\" (UniqueName: \"kubernetes.io/projected/7f1287d1-aab3-4631-9aa7-f208aedcf915-kube-api-access-xsx4m\") pod \"nova-cell1-db-create-mc9d5\" (UID: \"7f1287d1-aab3-4631-9aa7-f208aedcf915\") " pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.357061 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.382232 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74ll\" (UniqueName: \"kubernetes.io/projected/3d93a918-1e28-4147-a6f4-fdf1572c40c8-kube-api-access-m74ll\") pod \"nova-cell1-f4e9-account-create-update-4td2w\" (UID: \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\") " pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.382485 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knr78\" (UniqueName: \"kubernetes.io/projected/9a6e9c50-f83f-4b2c-975d-38f300d84169-kube-api-access-knr78\") pod \"nova-cell0-0ac2-account-create-update-wsmcm\" (UID: \"9a6e9c50-f83f-4b2c-975d-38f300d84169\") " pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.382726 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d93a918-1e28-4147-a6f4-fdf1572c40c8-operator-scripts\") pod \"nova-cell1-f4e9-account-create-update-4td2w\" (UID: \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\") " pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.382863 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6e9c50-f83f-4b2c-975d-38f300d84169-operator-scripts\") pod \"nova-cell0-0ac2-account-create-update-wsmcm\" (UID: \"9a6e9c50-f83f-4b2c-975d-38f300d84169\") " pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.383434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6e9c50-f83f-4b2c-975d-38f300d84169-operator-scripts\") pod \"nova-cell0-0ac2-account-create-update-wsmcm\" (UID: \"9a6e9c50-f83f-4b2c-975d-38f300d84169\") " pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.390439 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.399110 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knr78\" (UniqueName: \"kubernetes.io/projected/9a6e9c50-f83f-4b2c-975d-38f300d84169-kube-api-access-knr78\") pod \"nova-cell0-0ac2-account-create-update-wsmcm\" (UID: \"9a6e9c50-f83f-4b2c-975d-38f300d84169\") " pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.484944 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m74ll\" (UniqueName: \"kubernetes.io/projected/3d93a918-1e28-4147-a6f4-fdf1572c40c8-kube-api-access-m74ll\") pod \"nova-cell1-f4e9-account-create-update-4td2w\" (UID: \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\") " pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.485118 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d93a918-1e28-4147-a6f4-fdf1572c40c8-operator-scripts\") pod \"nova-cell1-f4e9-account-create-update-4td2w\" (UID: \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\") " pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.485884 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d93a918-1e28-4147-a6f4-fdf1572c40c8-operator-scripts\") pod \"nova-cell1-f4e9-account-create-update-4td2w\" (UID: \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\") " pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.501109 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m74ll\" (UniqueName: \"kubernetes.io/projected/3d93a918-1e28-4147-a6f4-fdf1572c40c8-kube-api-access-m74ll\") pod \"nova-cell1-f4e9-account-create-update-4td2w\" (UID: \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\") " pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.554229 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:31 crc kubenswrapper[4792]: I1127 17:32:31.599355 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:32 crc kubenswrapper[4792]: I1127 17:32:32.207816 4792 generic.go:334] "Generic (PLEG): container finished" podID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerID="9b747fe5686d277f3530c0f5bc248d5e907378aec673bff5cb5400dfface8180" exitCode=0 Nov 27 17:32:32 crc kubenswrapper[4792]: I1127 17:32:32.207858 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerDied","Data":"9b747fe5686d277f3530c0f5bc248d5e907378aec673bff5cb5400dfface8180"} Nov 27 17:32:32 crc kubenswrapper[4792]: I1127 17:32:32.309252 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.462806 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.584985 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-scripts\") pod \"385b9d5c-07c2-40b5-99c2-6176a9611572\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.585355 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-combined-ca-bundle\") pod \"385b9d5c-07c2-40b5-99c2-6176a9611572\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.585562 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-sg-core-conf-yaml\") pod \"385b9d5c-07c2-40b5-99c2-6176a9611572\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.585601 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-log-httpd\") pod \"385b9d5c-07c2-40b5-99c2-6176a9611572\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.585687 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-run-httpd\") pod \"385b9d5c-07c2-40b5-99c2-6176a9611572\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.585727 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr4p8\" (UniqueName: \"kubernetes.io/projected/385b9d5c-07c2-40b5-99c2-6176a9611572-kube-api-access-gr4p8\") pod \"385b9d5c-07c2-40b5-99c2-6176a9611572\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.585748 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-config-data\") pod 
\"385b9d5c-07c2-40b5-99c2-6176a9611572\" (UID: \"385b9d5c-07c2-40b5-99c2-6176a9611572\") " Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.593941 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "385b9d5c-07c2-40b5-99c2-6176a9611572" (UID: "385b9d5c-07c2-40b5-99c2-6176a9611572"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.594287 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "385b9d5c-07c2-40b5-99c2-6176a9611572" (UID: "385b9d5c-07c2-40b5-99c2-6176a9611572"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.606272 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/385b9d5c-07c2-40b5-99c2-6176a9611572-kube-api-access-gr4p8" (OuterVolumeSpecName: "kube-api-access-gr4p8") pod "385b9d5c-07c2-40b5-99c2-6176a9611572" (UID: "385b9d5c-07c2-40b5-99c2-6176a9611572"). InnerVolumeSpecName "kube-api-access-gr4p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.624523 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-scripts" (OuterVolumeSpecName: "scripts") pod "385b9d5c-07c2-40b5-99c2-6176a9611572" (UID: "385b9d5c-07c2-40b5-99c2-6176a9611572"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.689467 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.689503 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/385b9d5c-07c2-40b5-99c2-6176a9611572-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.689516 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr4p8\" (UniqueName: \"kubernetes.io/projected/385b9d5c-07c2-40b5-99c2-6176a9611572-kube-api-access-gr4p8\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.689527 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.797786 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "385b9d5c-07c2-40b5-99c2-6176a9611572" (UID: "385b9d5c-07c2-40b5-99c2-6176a9611572"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.804884 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "385b9d5c-07c2-40b5-99c2-6176a9611572" (UID: "385b9d5c-07c2-40b5-99c2-6176a9611572"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.883079 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-config-data" (OuterVolumeSpecName: "config-data") pod "385b9d5c-07c2-40b5-99c2-6176a9611572" (UID: "385b9d5c-07c2-40b5-99c2-6176a9611572"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.897224 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.897254 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:35 crc kubenswrapper[4792]: I1127 17:32:35.897266 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/385b9d5c-07c2-40b5-99c2-6176a9611572-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.221181 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fgbxf"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.288432 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"385b9d5c-07c2-40b5-99c2-6176a9611572","Type":"ContainerDied","Data":"a20121fbfcfa5a08deefa59baca008ad54bcc8fea4f5936f36adb86333235a45"} Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.288483 4792 scope.go:117] "RemoveContainer" containerID="4b16df6b903465b19f4b89aa1b93a684d3fd2ac91776d1d89e414d3ac08671e4" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.288610 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.300237 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e88dd573-027f-458e-81ed-c133e141afb6","Type":"ContainerStarted","Data":"65926bebcd50d48f7a7df2bd0c8da60830500506a4341d1e002cdda3fd5216cf"} Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.316352 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f4e9-account-create-update-4td2w"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.324469 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fgbxf" event={"ID":"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9","Type":"ContainerStarted","Data":"548b0405575fbb49623fb04782f49ff36e8dad6dedc69e63d8c7b3b09681f067"} Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.346343 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0ac2-account-create-update-wsmcm"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.375113 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8647-account-create-update-njdbf"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.382181 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.4774481010000002 podStartE2EDuration="14.38216181s" podCreationTimestamp="2025-11-27 17:32:22 +0000 UTC" firstStartedPulling="2025-11-27 17:32:23.53730944 +0000 UTC m=+1365.880135758" lastFinishedPulling="2025-11-27 17:32:35.442023149 +0000 UTC m=+1377.784849467" observedRunningTime="2025-11-27 17:32:36.318180196 +0000 UTC m=+1378.661006514" watchObservedRunningTime="2025-11-27 17:32:36.38216181 +0000 UTC m=+1378.724988128" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.456870 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-t282f"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.467094 4792 scope.go:117] "RemoveContainer" containerID="42c704040b2e7114532a3840fab2c929298f43ca59facf7569112198b3ba3355" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.501707 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mc9d5"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.520727 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.523643 4792 scope.go:117] "RemoveContainer" containerID="9b747fe5686d277f3530c0f5bc248d5e907378aec673bff5cb5400dfface8180" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.545172 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.567649 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:36 crc kubenswrapper[4792]: E1127 17:32:36.568174 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="ceilometer-notification-agent" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.568197 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="ceilometer-notification-agent" Nov 27 17:32:36 crc kubenswrapper[4792]: E1127 17:32:36.568222 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" 
containerName="ceilometer-central-agent" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.568230 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="ceilometer-central-agent" Nov 27 17:32:36 crc kubenswrapper[4792]: E1127 17:32:36.568254 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="sg-core" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.568262 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="sg-core" Nov 27 17:32:36 crc kubenswrapper[4792]: E1127 17:32:36.568291 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="proxy-httpd" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.568300 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="proxy-httpd" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.568516 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="ceilometer-central-agent" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.568539 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="proxy-httpd" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.568561 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="sg-core" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.568574 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" containerName="ceilometer-notification-agent" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.571253 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.574022 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.574700 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.577308 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.591286 4792 scope.go:117] "RemoveContainer" containerID="ad401a8d08e41e8e46cd359acbfea5359b344bb5da8af447810b4a705854c12c" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.703454 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="385b9d5c-07c2-40b5-99c2-6176a9611572" path="/var/lib/kubelet/pods/385b9d5c-07c2-40b5-99c2-6176a9611572/volumes" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.714473 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.714505 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98m4x\" (UniqueName: \"kubernetes.io/projected/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-kube-api-access-98m4x\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.714555 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-scripts\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.714608 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-log-httpd\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.714684 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.714709 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-run-httpd\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.714744 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-config-data\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " 
pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.816136 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-config-data\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.816244 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.816265 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98m4x\" (UniqueName: \"kubernetes.io/projected/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-kube-api-access-98m4x\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.816308 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-scripts\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.816372 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-log-httpd\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.816426 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.816452 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-run-httpd\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.816872 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-run-httpd\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.818871 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-log-httpd\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.842726 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98m4x\" (UniqueName: \"kubernetes.io/projected/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-kube-api-access-98m4x\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0" Nov 27 
17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.845324 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0"
Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.846681 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0"
Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.846838 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-config-data\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0"
Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.851114 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-scripts\") pod \"ceilometer-0\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " pod="openstack/ceilometer-0"
Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.898570 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.901164 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 27 17:32:36 crc kubenswrapper[4792]: I1127 17:32:36.920733 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55455cb8cf-gtjxc"
Nov 27 17:32:37 crc kubenswrapper[4792]: E1127 17:32:37.067736 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb418a81_4c30_47e4_8f2c_8ce1d96cbed9.slice/crio-ecbedbd0813b3998d2a87baf3f7f1f4e7cf69884771ad95054b72a8c2205f852.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb418a81_4c30_47e4_8f2c_8ce1d96cbed9.slice/crio-conmon-ecbedbd0813b3998d2a87baf3f7f1f4e7cf69884771ad95054b72a8c2205f852.scope\": RecentStats: unable to find data in memory cache]"
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.321815 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.322045 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerName="glance-log" containerID="cri-o://6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d" gracePeriod=30
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.322470 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerName="glance-httpd" containerID="cri-o://746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b" gracePeriod=30
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.341827 4792 generic.go:334] "Generic (PLEG): container finished" podID="bb418a81-4c30-47e4-8f2c-8ce1d96cbed9" containerID="ecbedbd0813b3998d2a87baf3f7f1f4e7cf69884771ad95054b72a8c2205f852" exitCode=0
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.342279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fgbxf" event={"ID":"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9","Type":"ContainerDied","Data":"ecbedbd0813b3998d2a87baf3f7f1f4e7cf69884771ad95054b72a8c2205f852"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.344681 4792 generic.go:334] "Generic (PLEG): container finished" podID="8bb12910-da22-4f2b-85ba-31ea98c5ee73" containerID="ab19081fce7ec06833f4e88cfa92673bfef58d9820f8c81f5280bc7283115f0e" exitCode=0
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.344761 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t282f" event={"ID":"8bb12910-da22-4f2b-85ba-31ea98c5ee73","Type":"ContainerDied","Data":"ab19081fce7ec06833f4e88cfa92673bfef58d9820f8c81f5280bc7283115f0e"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.344789 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t282f" event={"ID":"8bb12910-da22-4f2b-85ba-31ea98c5ee73","Type":"ContainerStarted","Data":"0aa89aa178fab91e98daec8fcead4b5af34bcac7ba5e2123007dd2815c2cfb0c"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.353514 4792 generic.go:334] "Generic (PLEG): container finished" podID="9a6e9c50-f83f-4b2c-975d-38f300d84169" containerID="d93942c6b2a9b644992c9f40071802dae48fcdf1065c2c531db3e6e85667fad2" exitCode=0
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.353591 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" event={"ID":"9a6e9c50-f83f-4b2c-975d-38f300d84169","Type":"ContainerDied","Data":"d93942c6b2a9b644992c9f40071802dae48fcdf1065c2c531db3e6e85667fad2"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.353616 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" event={"ID":"9a6e9c50-f83f-4b2c-975d-38f300d84169","Type":"ContainerStarted","Data":"ecbf770efdf0f3fce2a840884e90383da87c8e28b3d18c826ced127924d6462c"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.363561 4792 generic.go:334] "Generic (PLEG): container finished" podID="b57df2fa-2e89-48d4-86ae-cd332706de3f" containerID="b66aa979ce4708ecea7cfde05719c4e12c56e096e38aa1147571c78b485a9077" exitCode=0
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.363701 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8647-account-create-update-njdbf" event={"ID":"b57df2fa-2e89-48d4-86ae-cd332706de3f","Type":"ContainerDied","Data":"b66aa979ce4708ecea7cfde05719c4e12c56e096e38aa1147571c78b485a9077"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.363968 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8647-account-create-update-njdbf" event={"ID":"b57df2fa-2e89-48d4-86ae-cd332706de3f","Type":"ContainerStarted","Data":"71651b1ff52d387ffe25931e711af9fdbd7d149b96650d3347c9ce7ebd7fa0b2"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.368929 4792 generic.go:334] "Generic (PLEG): container finished" podID="7f1287d1-aab3-4631-9aa7-f208aedcf915" containerID="a0dca2840d61f83149dbfeaa0e3f16ccf530b80f5675edcf674161eacac706f0" exitCode=0
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.369030 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mc9d5" event={"ID":"7f1287d1-aab3-4631-9aa7-f208aedcf915","Type":"ContainerDied","Data":"a0dca2840d61f83149dbfeaa0e3f16ccf530b80f5675edcf674161eacac706f0"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.369059 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mc9d5" event={"ID":"7f1287d1-aab3-4631-9aa7-f208aedcf915","Type":"ContainerStarted","Data":"4ed6f1b28b7c01c0a3415689d3865f080bd9844e3af907af637b9c7d3401e4ab"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.371037 4792 generic.go:334] "Generic (PLEG): container finished" podID="3d93a918-1e28-4147-a6f4-fdf1572c40c8" containerID="fee903b72898cc2dd1e1a0cfac38363611e16b778b15e906689fc4c2c2020fd5" exitCode=0
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.371151 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" event={"ID":"3d93a918-1e28-4147-a6f4-fdf1572c40c8","Type":"ContainerDied","Data":"fee903b72898cc2dd1e1a0cfac38363611e16b778b15e906689fc4c2c2020fd5"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.371182 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" event={"ID":"3d93a918-1e28-4147-a6f4-fdf1572c40c8","Type":"ContainerStarted","Data":"58417d5aa55a626ff2f19aad7d5b84bb1573a7c7042f839970386f5ab58a516e"}
Nov 27 17:32:37 crc kubenswrapper[4792]: I1127 17:32:37.508835 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 27 17:32:37 crc kubenswrapper[4792]: W1127 17:32:37.513171 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c08ff73_f9d9_4b1d_9c57_28721cecc81e.slice/crio-851b6c101bb9150d86b48b18623901b8a9e149139be9196496c21f166e4b728a WatchSource:0}: Error finding container 851b6c101bb9150d86b48b18623901b8a9e149139be9196496c21f166e4b728a: Status 404 returned error can't find the container with id 851b6c101bb9150d86b48b18623901b8a9e149139be9196496c21f166e4b728a
Nov 27 17:32:38 crc kubenswrapper[4792]: I1127 17:32:38.382533 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerStarted","Data":"04a61673651a81fc62fccb7fe3ea345ef967ff4d0d354ef71b443cb4a5761027"}
Nov 27 17:32:38 crc kubenswrapper[4792]: I1127 17:32:38.383078 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerStarted","Data":"851b6c101bb9150d86b48b18623901b8a9e149139be9196496c21f166e4b728a"}
Nov 27 17:32:38 crc kubenswrapper[4792]: I1127 17:32:38.384439 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerID="6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d" exitCode=143
Nov 27 17:32:38 crc kubenswrapper[4792]: I1127 17:32:38.384516 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ce88c22-48ff-4c20-a73e-27324f35f70d","Type":"ContainerDied","Data":"6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d"}
Need to start a new one" pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.192181 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrjv4\" (UniqueName: \"kubernetes.io/projected/b57df2fa-2e89-48d4-86ae-cd332706de3f-kube-api-access-zrjv4\") pod \"b57df2fa-2e89-48d4-86ae-cd332706de3f\" (UID: \"b57df2fa-2e89-48d4-86ae-cd332706de3f\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.192485 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57df2fa-2e89-48d4-86ae-cd332706de3f-operator-scripts\") pod \"b57df2fa-2e89-48d4-86ae-cd332706de3f\" (UID: \"b57df2fa-2e89-48d4-86ae-cd332706de3f\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.193553 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b57df2fa-2e89-48d4-86ae-cd332706de3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b57df2fa-2e89-48d4-86ae-cd332706de3f" (UID: "b57df2fa-2e89-48d4-86ae-cd332706de3f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.207509 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b57df2fa-2e89-48d4-86ae-cd332706de3f-kube-api-access-zrjv4" (OuterVolumeSpecName: "kube-api-access-zrjv4") pod "b57df2fa-2e89-48d4-86ae-cd332706de3f" (UID: "b57df2fa-2e89-48d4-86ae-cd332706de3f"). InnerVolumeSpecName "kube-api-access-zrjv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.252051 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.252288 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerName="glance-log" containerID="cri-o://8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e" gracePeriod=30 Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.252803 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerName="glance-httpd" containerID="cri-o://b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef" gracePeriod=30 Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.295320 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b57df2fa-2e89-48d4-86ae-cd332706de3f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.295355 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrjv4\" (UniqueName: \"kubernetes.io/projected/b57df2fa-2e89-48d4-86ae-cd332706de3f-kube-api-access-zrjv4\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.419951 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8647-account-create-update-njdbf" event={"ID":"b57df2fa-2e89-48d4-86ae-cd332706de3f","Type":"ContainerDied","Data":"71651b1ff52d387ffe25931e711af9fdbd7d149b96650d3347c9ce7ebd7fa0b2"} Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.419999 4792 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71651b1ff52d387ffe25931e711af9fdbd7d149b96650d3347c9ce7ebd7fa0b2" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.419977 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8647-account-create-update-njdbf" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.424956 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerID="8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e" exitCode=143 Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.425012 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce42d3e2-e953-4283-81f3-855bfb27fd10","Type":"ContainerDied","Data":"8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e"} Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.486341 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.563099 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.571861 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.604760 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.610361 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6e9c50-f83f-4b2c-975d-38f300d84169-operator-scripts\") pod \"9a6e9c50-f83f-4b2c-975d-38f300d84169\" (UID: \"9a6e9c50-f83f-4b2c-975d-38f300d84169\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.610535 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6fpp\" (UniqueName: \"kubernetes.io/projected/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-kube-api-access-p6fpp\") pod \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\" (UID: \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.610719 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knr78\" (UniqueName: \"kubernetes.io/projected/9a6e9c50-f83f-4b2c-975d-38f300d84169-kube-api-access-knr78\") pod \"9a6e9c50-f83f-4b2c-975d-38f300d84169\" (UID: \"9a6e9c50-f83f-4b2c-975d-38f300d84169\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.610867 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-operator-scripts\") pod \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\" (UID: \"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.611064 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d93a918-1e28-4147-a6f4-fdf1572c40c8-operator-scripts\") pod \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\" (UID: \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\") " Nov 27 17:32:39 crc 
kubenswrapper[4792]: I1127 17:32:39.611145 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m74ll\" (UniqueName: \"kubernetes.io/projected/3d93a918-1e28-4147-a6f4-fdf1572c40c8-kube-api-access-m74ll\") pod \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\" (UID: \"3d93a918-1e28-4147-a6f4-fdf1572c40c8\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.629274 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d93a918-1e28-4147-a6f4-fdf1572c40c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d93a918-1e28-4147-a6f4-fdf1572c40c8" (UID: "3d93a918-1e28-4147-a6f4-fdf1572c40c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.629495 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6e9c50-f83f-4b2c-975d-38f300d84169-kube-api-access-knr78" (OuterVolumeSpecName: "kube-api-access-knr78") pod "9a6e9c50-f83f-4b2c-975d-38f300d84169" (UID: "9a6e9c50-f83f-4b2c-975d-38f300d84169"). InnerVolumeSpecName "kube-api-access-knr78". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.629666 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d93a918-1e28-4147-a6f4-fdf1572c40c8-kube-api-access-m74ll" (OuterVolumeSpecName: "kube-api-access-m74ll") pod "3d93a918-1e28-4147-a6f4-fdf1572c40c8" (UID: "3d93a918-1e28-4147-a6f4-fdf1572c40c8"). InnerVolumeSpecName "kube-api-access-m74ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.629778 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb418a81-4c30-47e4-8f2c-8ce1d96cbed9" (UID: "bb418a81-4c30-47e4-8f2c-8ce1d96cbed9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.632198 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6e9c50-f83f-4b2c-975d-38f300d84169-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a6e9c50-f83f-4b2c-975d-38f300d84169" (UID: "9a6e9c50-f83f-4b2c-975d-38f300d84169"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.635002 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.636153 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-kube-api-access-p6fpp" (OuterVolumeSpecName: "kube-api-access-p6fpp") pod "bb418a81-4c30-47e4-8f2c-8ce1d96cbed9" (UID: "bb418a81-4c30-47e4-8f2c-8ce1d96cbed9"). InnerVolumeSpecName "kube-api-access-p6fpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.712400 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-774d7\" (UniqueName: \"kubernetes.io/projected/8bb12910-da22-4f2b-85ba-31ea98c5ee73-kube-api-access-774d7\") pod \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\" (UID: \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.712539 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb12910-da22-4f2b-85ba-31ea98c5ee73-operator-scripts\") pod \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\" (UID: \"8bb12910-da22-4f2b-85ba-31ea98c5ee73\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.712803 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsx4m\" (UniqueName: \"kubernetes.io/projected/7f1287d1-aab3-4631-9aa7-f208aedcf915-kube-api-access-xsx4m\") pod \"7f1287d1-aab3-4631-9aa7-f208aedcf915\" (UID: \"7f1287d1-aab3-4631-9aa7-f208aedcf915\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.712854 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1287d1-aab3-4631-9aa7-f208aedcf915-operator-scripts\") pod \"7f1287d1-aab3-4631-9aa7-f208aedcf915\" (UID: \"7f1287d1-aab3-4631-9aa7-f208aedcf915\") " Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.713117 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb12910-da22-4f2b-85ba-31ea98c5ee73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bb12910-da22-4f2b-85ba-31ea98c5ee73" (UID: "8bb12910-da22-4f2b-85ba-31ea98c5ee73"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.713366 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6fpp\" (UniqueName: \"kubernetes.io/projected/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-kube-api-access-p6fpp\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.713400 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knr78\" (UniqueName: \"kubernetes.io/projected/9a6e9c50-f83f-4b2c-975d-38f300d84169-kube-api-access-knr78\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.713411 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.713422 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bb12910-da22-4f2b-85ba-31ea98c5ee73-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.713432 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d93a918-1e28-4147-a6f4-fdf1572c40c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.713441 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m74ll\" (UniqueName: \"kubernetes.io/projected/3d93a918-1e28-4147-a6f4-fdf1572c40c8-kube-api-access-m74ll\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.713474 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a6e9c50-f83f-4b2c-975d-38f300d84169-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.714086 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f1287d1-aab3-4631-9aa7-f208aedcf915-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f1287d1-aab3-4631-9aa7-f208aedcf915" (UID: "7f1287d1-aab3-4631-9aa7-f208aedcf915"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.716604 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1287d1-aab3-4631-9aa7-f208aedcf915-kube-api-access-xsx4m" (OuterVolumeSpecName: "kube-api-access-xsx4m") pod "7f1287d1-aab3-4631-9aa7-f208aedcf915" (UID: "7f1287d1-aab3-4631-9aa7-f208aedcf915"). InnerVolumeSpecName "kube-api-access-xsx4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.718091 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb12910-da22-4f2b-85ba-31ea98c5ee73-kube-api-access-774d7" (OuterVolumeSpecName: "kube-api-access-774d7") pod "8bb12910-da22-4f2b-85ba-31ea98c5ee73" (UID: "8bb12910-da22-4f2b-85ba-31ea98c5ee73"). InnerVolumeSpecName "kube-api-access-774d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.815104 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsx4m\" (UniqueName: \"kubernetes.io/projected/7f1287d1-aab3-4631-9aa7-f208aedcf915-kube-api-access-xsx4m\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.815136 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f1287d1-aab3-4631-9aa7-f208aedcf915-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:39 crc kubenswrapper[4792]: I1127 17:32:39.815147 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-774d7\" (UniqueName: \"kubernetes.io/projected/8bb12910-da22-4f2b-85ba-31ea98c5ee73-kube-api-access-774d7\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.457542 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" event={"ID":"3d93a918-1e28-4147-a6f4-fdf1572c40c8","Type":"ContainerDied","Data":"58417d5aa55a626ff2f19aad7d5b84bb1573a7c7042f839970386f5ab58a516e"} Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.458482 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58417d5aa55a626ff2f19aad7d5b84bb1573a7c7042f839970386f5ab58a516e" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.457800 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f4e9-account-create-update-4td2w" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.459623 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fgbxf" event={"ID":"bb418a81-4c30-47e4-8f2c-8ce1d96cbed9","Type":"ContainerDied","Data":"548b0405575fbb49623fb04782f49ff36e8dad6dedc69e63d8c7b3b09681f067"} Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.459690 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548b0405575fbb49623fb04782f49ff36e8dad6dedc69e63d8c7b3b09681f067" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.459752 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fgbxf" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.476286 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-t282f" event={"ID":"8bb12910-da22-4f2b-85ba-31ea98c5ee73","Type":"ContainerDied","Data":"0aa89aa178fab91e98daec8fcead4b5af34bcac7ba5e2123007dd2815c2cfb0c"} Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.476333 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa89aa178fab91e98daec8fcead4b5af34bcac7ba5e2123007dd2815c2cfb0c" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.476392 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-t282f" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.481312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerStarted","Data":"46084b28937bcaa2cf6396a6154cb5cef5c0c492956f2579cc01a66dc6244c26"} Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.485006 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" event={"ID":"9a6e9c50-f83f-4b2c-975d-38f300d84169","Type":"ContainerDied","Data":"ecbf770efdf0f3fce2a840884e90383da87c8e28b3d18c826ced127924d6462c"} Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.485168 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecbf770efdf0f3fce2a840884e90383da87c8e28b3d18c826ced127924d6462c" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.485313 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ac2-account-create-update-wsmcm" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.490746 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mc9d5" event={"ID":"7f1287d1-aab3-4631-9aa7-f208aedcf915","Type":"ContainerDied","Data":"4ed6f1b28b7c01c0a3415689d3865f080bd9844e3af907af637b9c7d3401e4ab"} Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.490785 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed6f1b28b7c01c0a3415689d3865f080bd9844e3af907af637b9c7d3401e4ab" Nov 27 17:32:40 crc kubenswrapper[4792]: I1127 17:32:40.490846 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mc9d5" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.000399 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.247918 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.355574 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx6h4\" (UniqueName: \"kubernetes.io/projected/6ce88c22-48ff-4c20-a73e-27324f35f70d-kube-api-access-jx6h4\") pod \"6ce88c22-48ff-4c20-a73e-27324f35f70d\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.356014 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-public-tls-certs\") pod \"6ce88c22-48ff-4c20-a73e-27324f35f70d\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.356180 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-logs\") pod \"6ce88c22-48ff-4c20-a73e-27324f35f70d\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.356605 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-logs" (OuterVolumeSpecName: "logs") pod "6ce88c22-48ff-4c20-a73e-27324f35f70d" (UID: "6ce88c22-48ff-4c20-a73e-27324f35f70d"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.356688 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-config-data\") pod \"6ce88c22-48ff-4c20-a73e-27324f35f70d\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.356711 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6ce88c22-48ff-4c20-a73e-27324f35f70d\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.357053 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-scripts\") pod \"6ce88c22-48ff-4c20-a73e-27324f35f70d\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.357098 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-httpd-run\") pod \"6ce88c22-48ff-4c20-a73e-27324f35f70d\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.357116 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-combined-ca-bundle\") pod \"6ce88c22-48ff-4c20-a73e-27324f35f70d\" (UID: \"6ce88c22-48ff-4c20-a73e-27324f35f70d\") " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.360059 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.360891 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6ce88c22-48ff-4c20-a73e-27324f35f70d" (UID: "6ce88c22-48ff-4c20-a73e-27324f35f70d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.362231 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "6ce88c22-48ff-4c20-a73e-27324f35f70d" (UID: "6ce88c22-48ff-4c20-a73e-27324f35f70d"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.371870 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce88c22-48ff-4c20-a73e-27324f35f70d-kube-api-access-jx6h4" (OuterVolumeSpecName: "kube-api-access-jx6h4") pod "6ce88c22-48ff-4c20-a73e-27324f35f70d" (UID: "6ce88c22-48ff-4c20-a73e-27324f35f70d"). InnerVolumeSpecName "kube-api-access-jx6h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.373430 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-scripts" (OuterVolumeSpecName: "scripts") pod "6ce88c22-48ff-4c20-a73e-27324f35f70d" (UID: "6ce88c22-48ff-4c20-a73e-27324f35f70d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.419571 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ce88c22-48ff-4c20-a73e-27324f35f70d" (UID: "6ce88c22-48ff-4c20-a73e-27324f35f70d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.424538 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6ce88c22-48ff-4c20-a73e-27324f35f70d" (UID: "6ce88c22-48ff-4c20-a73e-27324f35f70d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.448431 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-config-data" (OuterVolumeSpecName: "config-data") pod "6ce88c22-48ff-4c20-a73e-27324f35f70d" (UID: "6ce88c22-48ff-4c20-a73e-27324f35f70d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.462249 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx6h4\" (UniqueName: \"kubernetes.io/projected/6ce88c22-48ff-4c20-a73e-27324f35f70d-kube-api-access-jx6h4\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.462343 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.462354 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.462390 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.462400 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.462409 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ce88c22-48ff-4c20-a73e-27324f35f70d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.462417 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ce88c22-48ff-4c20-a73e-27324f35f70d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.488591 4792 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.530956 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerID="746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b" exitCode=0 Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.531077 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ce88c22-48ff-4c20-a73e-27324f35f70d","Type":"ContainerDied","Data":"746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b"} Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.531109 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6ce88c22-48ff-4c20-a73e-27324f35f70d","Type":"ContainerDied","Data":"024bf9be0562df9c440626675994c586ffd2a8bd27d830af3246fa2bf535c66f"} Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.531127 4792 scope.go:117] "RemoveContainer" containerID="746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.531304 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.560254 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerStarted","Data":"65e1fa57a5a2b727047f62824011bdc7445757586db92af222ef02d1b6378714"} Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.564893 4792 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.585355 4792 scope.go:117] "RemoveContainer" containerID="6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.599632 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.624139 4792 scope.go:117] "RemoveContainer" containerID="746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.628077 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b\": container with ID starting with 746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b not found: ID does not exist" containerID="746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.628138 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b"} err="failed to get container status \"746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b\": rpc error: code = NotFound desc = could not find container 
\"746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b\": container with ID starting with 746ac7acbc18a256ac4ca5c59c24d6225888e2d1e15ee7044e493b4ff783937b not found: ID does not exist" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.628165 4792 scope.go:117] "RemoveContainer" containerID="6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.628616 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d\": container with ID starting with 6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d not found: ID does not exist" containerID="6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.628639 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d"} err="failed to get container status \"6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d\": rpc error: code = NotFound desc = could not find container \"6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d\": container with ID starting with 6a257ca868ab0152cd7d8dcb1dca62d0a957f0eb7dcfce7952e2a079a36fd21d not found: ID does not exist" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.650430 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.670101 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.671103 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b57df2fa-2e89-48d4-86ae-cd332706de3f" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671129 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b57df2fa-2e89-48d4-86ae-cd332706de3f" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.671154 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb418a81-4c30-47e4-8f2c-8ce1d96cbed9" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671160 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb418a81-4c30-47e4-8f2c-8ce1d96cbed9" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.671173 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb12910-da22-4f2b-85ba-31ea98c5ee73" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671180 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb12910-da22-4f2b-85ba-31ea98c5ee73" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.671197 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6e9c50-f83f-4b2c-975d-38f300d84169" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671203 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6e9c50-f83f-4b2c-975d-38f300d84169" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.671225 4792 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerName="glance-log" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671231 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerName="glance-log" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.671252 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f1287d1-aab3-4631-9aa7-f208aedcf915" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671258 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1287d1-aab3-4631-9aa7-f208aedcf915" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.671273 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d93a918-1e28-4147-a6f4-fdf1572c40c8" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671279 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d93a918-1e28-4147-a6f4-fdf1572c40c8" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: E1127 17:32:41.671314 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerName="glance-httpd" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671321 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerName="glance-httpd" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671727 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b57df2fa-2e89-48d4-86ae-cd332706de3f" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671753 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6e9c50-f83f-4b2c-975d-38f300d84169" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671768 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f1287d1-aab3-4631-9aa7-f208aedcf915" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671780 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb418a81-4c30-47e4-8f2c-8ce1d96cbed9" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671794 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerName="glance-log" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671815 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" containerName="glance-httpd" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671826 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d93a918-1e28-4147-a6f4-fdf1572c40c8" containerName="mariadb-account-create-update" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.671845 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb12910-da22-4f2b-85ba-31ea98c5ee73" containerName="mariadb-database-create" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.674426 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.708754 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jxhlm"] Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.709195 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.709507 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.710183 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.726051 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.727242 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-n6z4s" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.727368 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.760770 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.771774 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.771826 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.771899 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4657ccb2-3806-41d0-932d-195b809345fd-logs\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772037 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x6gf\" (UniqueName: \"kubernetes.io/projected/4657ccb2-3806-41d0-932d-195b809345fd-kube-api-access-9x6gf\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772128 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-config-data\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772195 4792 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-scripts\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772259 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772288 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772455 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772538 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4657ccb2-3806-41d0-932d-195b809345fd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772602 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrzw\" (UniqueName: \"kubernetes.io/projected/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-kube-api-access-mkrzw\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.772674 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.809793 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jxhlm"] Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874534 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-scripts\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874590 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874612 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874675 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4657ccb2-3806-41d0-932d-195b809345fd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkrzw\" (UniqueName: \"kubernetes.io/projected/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-kube-api-access-mkrzw\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874762 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874792 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874811 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874845 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4657ccb2-3806-41d0-932d-195b809345fd-logs\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.874845 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.876864 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x6gf\" (UniqueName: \"kubernetes.io/projected/4657ccb2-3806-41d0-932d-195b809345fd-kube-api-access-9x6gf\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.876981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-config-data\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.877420 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4657ccb2-3806-41d0-932d-195b809345fd-logs\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.877442 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4657ccb2-3806-41d0-932d-195b809345fd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.882292 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-config-data\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.883429 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.883928 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.884734 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-scripts\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.886191 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-scripts\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " 
pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.886825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4657ccb2-3806-41d0-932d-195b809345fd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.892955 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-config-data\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.907863 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.915990 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x6gf\" (UniqueName: \"kubernetes.io/projected/4657ccb2-3806-41d0-932d-195b809345fd-kube-api-access-9x6gf\") pod \"glance-default-external-api-0\" (UID: \"4657ccb2-3806-41d0-932d-195b809345fd\") " pod="openstack/glance-default-external-api-0" Nov 27 17:32:41 crc kubenswrapper[4792]: I1127 17:32:41.916169 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkrzw\" (UniqueName: \"kubernetes.io/projected/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-kube-api-access-mkrzw\") pod \"nova-cell0-conductor-db-sync-jxhlm\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.026946 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.031089 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.583692 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerStarted","Data":"ccbb595be23997fa632d67b56dca3ba96afb8516a7445da4c3df939b780a7dbb"} Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.583853 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="ceilometer-central-agent" containerID="cri-o://04a61673651a81fc62fccb7fe3ea345ef967ff4d0d354ef71b443cb4a5761027" gracePeriod=30 Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.583887 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.583947 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="sg-core" containerID="cri-o://65e1fa57a5a2b727047f62824011bdc7445757586db92af222ef02d1b6378714" gracePeriod=30 Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.583991 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="ceilometer-notification-agent" containerID="cri-o://46084b28937bcaa2cf6396a6154cb5cef5c0c492956f2579cc01a66dc6244c26" gracePeriod=30 Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.583960 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="proxy-httpd" containerID="cri-o://ccbb595be23997fa632d67b56dca3ba96afb8516a7445da4c3df939b780a7dbb" gracePeriod=30 Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.609987 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.282756252 podStartE2EDuration="6.609961135s" podCreationTimestamp="2025-11-27 17:32:36 +0000 UTC" firstStartedPulling="2025-11-27 17:32:37.517234892 +0000 UTC m=+1379.860061210" lastFinishedPulling="2025-11-27 17:32:41.844439775 +0000 UTC m=+1384.187266093" observedRunningTime="2025-11-27 17:32:42.602586441 +0000 UTC m=+1384.945412759" watchObservedRunningTime="2025-11-27 17:32:42.609961135 +0000 UTC m=+1384.952787443" Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.676224 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.734886 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce88c22-48ff-4c20-a73e-27324f35f70d" path="/var/lib/kubelet/pods/6ce88c22-48ff-4c20-a73e-27324f35f70d/volumes" Nov 27 17:32:42 crc kubenswrapper[4792]: I1127 17:32:42.796841 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jxhlm"] Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.340141 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.395946 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-85bccd9657-54g5w"] Nov 27 17:32:43 crc kubenswrapper[4792]: E1127 17:32:43.396578 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerName="glance-log" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.396599 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerName="glance-log" Nov 27 17:32:43 crc kubenswrapper[4792]: E1127 17:32:43.396609 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerName="glance-httpd" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.396615 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerName="glance-httpd" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.396884 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerName="glance-httpd" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.396902 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerName="glance-log" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.397811 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410388 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410483 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rxjm\" (UniqueName: \"kubernetes.io/projected/ce42d3e2-e953-4283-81f3-855bfb27fd10-kube-api-access-9rxjm\") pod \"ce42d3e2-e953-4283-81f3-855bfb27fd10\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410618 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-scripts\") pod \"ce42d3e2-e953-4283-81f3-855bfb27fd10\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410720 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410765 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-config-data\") pod \"ce42d3e2-e953-4283-81f3-855bfb27fd10\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410802 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-combined-ca-bundle\") pod \"ce42d3e2-e953-4283-81f3-855bfb27fd10\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410824 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") 
pod \"ce42d3e2-e953-4283-81f3-855bfb27fd10\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410843 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4s5ct" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410848 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-logs\") pod \"ce42d3e2-e953-4283-81f3-855bfb27fd10\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410866 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-internal-tls-certs\") pod \"ce42d3e2-e953-4283-81f3-855bfb27fd10\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.410886 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-httpd-run\") pod \"ce42d3e2-e953-4283-81f3-855bfb27fd10\" (UID: \"ce42d3e2-e953-4283-81f3-855bfb27fd10\") " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.412031 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ce42d3e2-e953-4283-81f3-855bfb27fd10" (UID: "ce42d3e2-e953-4283-81f3-855bfb27fd10"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.412333 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-logs" (OuterVolumeSpecName: "logs") pod "ce42d3e2-e953-4283-81f3-855bfb27fd10" (UID: "ce42d3e2-e953-4283-81f3-855bfb27fd10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.433166 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "ce42d3e2-e953-4283-81f3-855bfb27fd10" (UID: "ce42d3e2-e953-4283-81f3-855bfb27fd10"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.439818 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-scripts" (OuterVolumeSpecName: "scripts") pod "ce42d3e2-e953-4283-81f3-855bfb27fd10" (UID: "ce42d3e2-e953-4283-81f3-855bfb27fd10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.447868 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce42d3e2-e953-4283-81f3-855bfb27fd10-kube-api-access-9rxjm" (OuterVolumeSpecName: "kube-api-access-9rxjm") pod "ce42d3e2-e953-4283-81f3-855bfb27fd10" (UID: "ce42d3e2-e953-4283-81f3-855bfb27fd10"). InnerVolumeSpecName "kube-api-access-9rxjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.470041 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-85bccd9657-54g5w"] Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513127 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513266 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data-custom\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513322 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz64p\" (UniqueName: \"kubernetes.io/projected/48dc6ea7-886f-4f25-a954-635e730e6b81-kube-api-access-zz64p\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513363 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-combined-ca-bundle\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513477 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513491 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513501 4792 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce42d3e2-e953-4283-81f3-855bfb27fd10-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513511 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rxjm\" (UniqueName: \"kubernetes.io/projected/ce42d3e2-e953-4283-81f3-855bfb27fd10-kube-api-access-9rxjm\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.513520 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.522284 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce42d3e2-e953-4283-81f3-855bfb27fd10" (UID: "ce42d3e2-e953-4283-81f3-855bfb27fd10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.583031 4792 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.600172 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lh72w"] Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.630941 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.631061 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data-custom\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.631113 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz64p\" (UniqueName: \"kubernetes.io/projected/48dc6ea7-886f-4f25-a954-635e730e6b81-kube-api-access-zz64p\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.631732 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-combined-ca-bundle\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.631857 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.631911 4792 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.649785 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-combined-ca-bundle\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.652923 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.653885 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.657982 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data-custom\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.660956 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" event={"ID":"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9","Type":"ContainerStarted","Data":"dd2f0002a8ad409fb3049207d7990896c31121ba2045d744ac99ffce037330a5"} Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.670687 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-c5fd6ddbf-xxrj5"] Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.683927 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz64p\" (UniqueName: \"kubernetes.io/projected/48dc6ea7-886f-4f25-a954-635e730e6b81-kube-api-access-zz64p\") pod \"heat-engine-85bccd9657-54g5w\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.684057 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-config-data" (OuterVolumeSpecName: "config-data") pod "ce42d3e2-e953-4283-81f3-855bfb27fd10" (UID: "ce42d3e2-e953-4283-81f3-855bfb27fd10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.690589 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.691468 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4657ccb2-3806-41d0-932d-195b809345fd","Type":"ContainerStarted","Data":"38519269c0794d838a79b2059bdfc95ed34cfd8da30e40c0e8169b5283d08358"} Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.693434 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.698282 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce42d3e2-e953-4283-81f3-855bfb27fd10" containerID="b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef" exitCode=0 Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.698389 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce42d3e2-e953-4283-81f3-855bfb27fd10","Type":"ContainerDied","Data":"b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef"} Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.698435 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce42d3e2-e953-4283-81f3-855bfb27fd10","Type":"ContainerDied","Data":"58b16adbc8ba86185fc5b702b8d020e3a0245c75ad4a86a848104c801958dddb"} Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.698457 4792 scope.go:117] "RemoveContainer" containerID="b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.698680 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.705269 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ce42d3e2-e953-4283-81f3-855bfb27fd10" (UID: "ce42d3e2-e953-4283-81f3-855bfb27fd10"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.717140 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerID="ccbb595be23997fa632d67b56dca3ba96afb8516a7445da4c3df939b780a7dbb" exitCode=0 Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.717196 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerID="65e1fa57a5a2b727047f62824011bdc7445757586db92af222ef02d1b6378714" exitCode=2 Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.717207 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerID="46084b28937bcaa2cf6396a6154cb5cef5c0c492956f2579cc01a66dc6244c26" exitCode=0 Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.717229 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerDied","Data":"ccbb595be23997fa632d67b56dca3ba96afb8516a7445da4c3df939b780a7dbb"} Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.717258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerDied","Data":"65e1fa57a5a2b727047f62824011bdc7445757586db92af222ef02d1b6378714"} Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.717269 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerDied","Data":"46084b28937bcaa2cf6396a6154cb5cef5c0c492956f2579cc01a66dc6244c26"} Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.737022 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-combined-ca-bundle\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.738601 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data-custom\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.738760 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqhbz\" (UniqueName: \"kubernetes.io/projected/5c065100-450b-4cf1-b831-86963871ed12-kube-api-access-gqhbz\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.738880 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.739041 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzcx6\" 
(UniqueName: \"kubernetes.io/projected/9c02eb12-16a2-4c2d-849f-0309fd114fd2-kube-api-access-xzcx6\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.739158 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.741860 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.742096 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-config\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.742203 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.742283 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.742620 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.742729 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce42d3e2-e953-4283-81f3-855bfb27fd10-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.748929 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-c5fd6ddbf-xxrj5"] Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.787954 4792 scope.go:117] "RemoveContainer" containerID="8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.817515 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lh72w"] Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.839624 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-597ccfb68f-zfwfs"] Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.842383 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.845259 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.845712 4792 scope.go:117] "RemoveContainer" containerID="b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef" Nov 27 17:32:43 crc kubenswrapper[4792]: E1127 17:32:43.849325 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef\": container with ID starting with b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef not found: ID does not exist" containerID="b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.849372 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef"} err="failed to get container status \"b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef\": rpc error: code = NotFound desc = could not find container \"b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef\": container with ID starting with b7375688969c273dfa3d2439ca47b14b4c2e8a1b45b1476bf1bfdc61158fceef not found: ID does not exist" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.849401 4792 scope.go:117] "RemoveContainer" containerID="8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e" Nov 27 17:32:43 crc kubenswrapper[4792]: E1127 17:32:43.849856 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e\": container with ID starting with 8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e not found: ID does not exist" containerID="8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.849879 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e"} err="failed to get container status \"8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e\": rpc error: code = NotFound desc = could not find container \"8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e\": container with ID starting with 8399dcd499d9d6956b5bc3da39762b59f7f9d653bc63bf74d14a1350a226326e not found: ID does not exist" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.853149 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.858838 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzcx6\" (UniqueName: \"kubernetes.io/projected/9c02eb12-16a2-4c2d-849f-0309fd114fd2-kube-api-access-xzcx6\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.859105 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.859446 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.859490 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-config\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.859535 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.859570 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.859776 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-combined-ca-bundle\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.859851 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data-custom\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.860023 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqhbz\" (UniqueName: \"kubernetes.io/projected/5c065100-450b-4cf1-b831-86963871ed12-kube-api-access-gqhbz\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " 
pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.860094 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.860503 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.861083 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.861398 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-svc\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.862088 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.864358 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-config\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.866335 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data-custom\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.867459 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.868278 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-combined-ca-bundle\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.890380 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xzcx6\" (UniqueName: \"kubernetes.io/projected/9c02eb12-16a2-4c2d-849f-0309fd114fd2-kube-api-access-xzcx6\") pod \"dnsmasq-dns-7d978555f9-lh72w\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") " pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.921106 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-597ccfb68f-zfwfs"] Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.943638 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqhbz\" (UniqueName: \"kubernetes.io/projected/5c065100-450b-4cf1-b831-86963871ed12-kube-api-access-gqhbz\") pod \"heat-cfnapi-c5fd6ddbf-xxrj5\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.964440 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgqck\" (UniqueName: \"kubernetes.io/projected/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-kube-api-access-jgqck\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.965027 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.965256 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data-custom\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:43 crc kubenswrapper[4792]: I1127 17:32:43.965344 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-combined-ca-bundle\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.049702 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.076444 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.078662 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.078767 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data-custom\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.078786 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-combined-ca-bundle\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.078845 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgqck\" (UniqueName: \"kubernetes.io/projected/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-kube-api-access-jgqck\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.093187 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-combined-ca-bundle\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.096595 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.104485 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data-custom\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.116580 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgqck\" (UniqueName: \"kubernetes.io/projected/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-kube-api-access-jgqck\") pod \"heat-api-597ccfb68f-zfwfs\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.134336 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.155482 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.186142 4792 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.188461 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.199545 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.199805 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.215017 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.238192 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.297768 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d64731cc-a4fe-498e-9553-4f7f5fce34a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.299121 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9pl\" (UniqueName: \"kubernetes.io/projected/d64731cc-a4fe-498e-9553-4f7f5fce34a2-kube-api-access-pc9pl\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.299246 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.299531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.299675 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.299820 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.300003 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.300172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d64731cc-a4fe-498e-9553-4f7f5fce34a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.424369 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc9pl\" (UniqueName: \"kubernetes.io/projected/d64731cc-a4fe-498e-9553-4f7f5fce34a2-kube-api-access-pc9pl\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.424430 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.424520 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.424541 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.424569 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.424610 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.424658 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d64731cc-a4fe-498e-9553-4f7f5fce34a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.424704 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d64731cc-a4fe-498e-9553-4f7f5fce34a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.426014 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d64731cc-a4fe-498e-9553-4f7f5fce34a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.426618 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.428294 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d64731cc-a4fe-498e-9553-4f7f5fce34a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.449941 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.452751 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.467344 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.473431 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc9pl\" (UniqueName: \"kubernetes.io/projected/d64731cc-a4fe-498e-9553-4f7f5fce34a2-kube-api-access-pc9pl\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.475473 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d64731cc-a4fe-498e-9553-4f7f5fce34a2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.521309 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"d64731cc-a4fe-498e-9553-4f7f5fce34a2\") " pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.562522 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.664205 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-85bccd9657-54g5w"] Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.870968 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce42d3e2-e953-4283-81f3-855bfb27fd10" path="/var/lib/kubelet/pods/ce42d3e2-e953-4283-81f3-855bfb27fd10/volumes" Nov 27 17:32:44 crc kubenswrapper[4792]: I1127 17:32:44.877959 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4657ccb2-3806-41d0-932d-195b809345fd","Type":"ContainerStarted","Data":"1c3375566b25e2feebc0bafd244e6adafeab7501cbf573458e9920046331e851"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.329781 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-c5fd6ddbf-xxrj5"] Nov 27 17:32:45 crc kubenswrapper[4792]: W1127 17:32:45.343432 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c02eb12_16a2_4c2d_849f_0309fd114fd2.slice/crio-f568b9f087c4d319eb1f8d2984f59316cd9c94a8cfa7b48d3490e7ba1a6056ff WatchSource:0}: Error finding container f568b9f087c4d319eb1f8d2984f59316cd9c94a8cfa7b48d3490e7ba1a6056ff: Status 404 returned error can't find the container with id f568b9f087c4d319eb1f8d2984f59316cd9c94a8cfa7b48d3490e7ba1a6056ff Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.363965 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lh72w"] Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.468453 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-597ccfb68f-zfwfs"] Nov 27 17:32:45 crc kubenswrapper[4792]: W1127 17:32:45.515999 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71d08fed_4336_49f2_aae6_b3d3fff2a2e7.slice/crio-93984e761293b8a9502818d956056e32651aebff42acb664b383009e93e2c99d WatchSource:0}: Error finding container 93984e761293b8a9502818d956056e32651aebff42acb664b383009e93e2c99d: Status 404 returned error can't find the container with id 93984e761293b8a9502818d956056e32651aebff42acb664b383009e93e2c99d Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.793614 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 27 17:32:45 crc kubenswrapper[4792]: W1127 17:32:45.804360 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd64731cc_a4fe_498e_9553_4f7f5fce34a2.slice/crio-81c76fc73b397ba915c4121c977619794d40a079bbce844b00a1d41c36f802f2 WatchSource:0}: Error finding container 81c76fc73b397ba915c4121c977619794d40a079bbce844b00a1d41c36f802f2: Status 404 returned error can't find the container with id 81c76fc73b397ba915c4121c977619794d40a079bbce844b00a1d41c36f802f2 Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.904422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-597ccfb68f-zfwfs" 
event={"ID":"71d08fed-4336-49f2-aae6-b3d3fff2a2e7","Type":"ContainerStarted","Data":"93984e761293b8a9502818d956056e32651aebff42acb664b383009e93e2c99d"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.906698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d64731cc-a4fe-498e-9553-4f7f5fce34a2","Type":"ContainerStarted","Data":"81c76fc73b397ba915c4121c977619794d40a079bbce844b00a1d41c36f802f2"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.909222 4792 generic.go:334] "Generic (PLEG): container finished" podID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" containerID="2243131349b87d5f62af0e945742d231ac9b363ca41738f5c2c3678c9865e26e" exitCode=0 Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.909279 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" event={"ID":"9c02eb12-16a2-4c2d-849f-0309fd114fd2","Type":"ContainerDied","Data":"2243131349b87d5f62af0e945742d231ac9b363ca41738f5c2c3678c9865e26e"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.909305 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" event={"ID":"9c02eb12-16a2-4c2d-849f-0309fd114fd2","Type":"ContainerStarted","Data":"f568b9f087c4d319eb1f8d2984f59316cd9c94a8cfa7b48d3490e7ba1a6056ff"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.913961 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" event={"ID":"5c065100-450b-4cf1-b831-86963871ed12","Type":"ContainerStarted","Data":"9046e5a023bbd95eaf7408586cbc1d3ba30f39169415df44c9e7c4b0404ae6b9"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.925578 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4657ccb2-3806-41d0-932d-195b809345fd","Type":"ContainerStarted","Data":"7633b4ccc41648d035575182ee0547fe93d95eb1c15e25e44786102ca4e984db"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.936821 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85bccd9657-54g5w" event={"ID":"48dc6ea7-886f-4f25-a954-635e730e6b81","Type":"ContainerStarted","Data":"397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.936873 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85bccd9657-54g5w" event={"ID":"48dc6ea7-886f-4f25-a954-635e730e6b81","Type":"ContainerStarted","Data":"d0df677956216d53d9618d59e142196e6d5b76e0ab3f40c9fe3a76f33b76c924"} Nov 27 17:32:45 crc kubenswrapper[4792]: I1127 17:32:45.938079 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:32:46 crc kubenswrapper[4792]: I1127 17:32:46.021986 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-85bccd9657-54g5w" podStartSLOduration=3.021962317 podStartE2EDuration="3.021962317s" podCreationTimestamp="2025-11-27 17:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:45.956152717 +0000 UTC m=+1388.298979035" watchObservedRunningTime="2025-11-27 17:32:46.021962317 +0000 UTC m=+1388.364788635" Nov 27 17:32:46 crc kubenswrapper[4792]: I1127 17:32:46.026153 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=5.026141611 podStartE2EDuration="5.026141611s" podCreationTimestamp="2025-11-27 17:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:45.976030362 +0000 UTC m=+1388.318856700" watchObservedRunningTime="2025-11-27 17:32:46.026141611 +0000 UTC m=+1388.368967929" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.034547 4792 generic.go:334] "Generic (PLEG): container finished" podID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerID="04a61673651a81fc62fccb7fe3ea345ef967ff4d0d354ef71b443cb4a5761027" exitCode=0 Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.034598 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerDied","Data":"04a61673651a81fc62fccb7fe3ea345ef967ff4d0d354ef71b443cb4a5761027"} Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.066052 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d64731cc-a4fe-498e-9553-4f7f5fce34a2","Type":"ContainerStarted","Data":"41718f413854da9cf89054f417daf2893535a97f22f00fbd924c1e820f2b122a"} Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.078341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" event={"ID":"9c02eb12-16a2-4c2d-849f-0309fd114fd2","Type":"ContainerStarted","Data":"f328ec11bbe068159b78754ea6a5913c12a4b2e5a52d5c36fb9c02e2925b7398"} Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.078907 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.126997 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" podStartSLOduration=4.126977668 podStartE2EDuration="4.126977668s" podCreationTimestamp="2025-11-27 17:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:47.114855716 +0000 UTC m=+1389.457682034" watchObservedRunningTime="2025-11-27 17:32:47.126977668 +0000 UTC m=+1389.469803986" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.476998 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.651135 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-run-httpd\") pod \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.651200 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-config-data\") pod \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.651229 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98m4x\" (UniqueName: \"kubernetes.io/projected/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-kube-api-access-98m4x\") pod \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.651268 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-sg-core-conf-yaml\") pod \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.651369 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-scripts\") pod \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.651762 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-log-httpd\") pod \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.651791 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-combined-ca-bundle\") pod \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\" (UID: \"6c08ff73-f9d9-4b1d-9c57-28721cecc81e\") " Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.651802 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6c08ff73-f9d9-4b1d-9c57-28721cecc81e" (UID: "6c08ff73-f9d9-4b1d-9c57-28721cecc81e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.652221 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.652478 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6c08ff73-f9d9-4b1d-9c57-28721cecc81e" (UID: "6c08ff73-f9d9-4b1d-9c57-28721cecc81e"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.656773 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-kube-api-access-98m4x" (OuterVolumeSpecName: "kube-api-access-98m4x") pod "6c08ff73-f9d9-4b1d-9c57-28721cecc81e" (UID: "6c08ff73-f9d9-4b1d-9c57-28721cecc81e"). InnerVolumeSpecName "kube-api-access-98m4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.658202 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-scripts" (OuterVolumeSpecName: "scripts") pod "6c08ff73-f9d9-4b1d-9c57-28721cecc81e" (UID: "6c08ff73-f9d9-4b1d-9c57-28721cecc81e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.707798 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6c08ff73-f9d9-4b1d-9c57-28721cecc81e" (UID: "6c08ff73-f9d9-4b1d-9c57-28721cecc81e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.755244 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.755278 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98m4x\" (UniqueName: \"kubernetes.io/projected/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-kube-api-access-98m4x\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.755289 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.755299 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.817913 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c08ff73-f9d9-4b1d-9c57-28721cecc81e" (UID: "6c08ff73-f9d9-4b1d-9c57-28721cecc81e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.832399 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-config-data" (OuterVolumeSpecName: "config-data") pod "6c08ff73-f9d9-4b1d-9c57-28721cecc81e" (UID: "6c08ff73-f9d9-4b1d-9c57-28721cecc81e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.856965 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:47 crc kubenswrapper[4792]: I1127 17:32:47.856994 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c08ff73-f9d9-4b1d-9c57-28721cecc81e-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.114057 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.114066 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c08ff73-f9d9-4b1d-9c57-28721cecc81e","Type":"ContainerDied","Data":"851b6c101bb9150d86b48b18623901b8a9e149139be9196496c21f166e4b728a"} Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.114130 4792 scope.go:117] "RemoveContainer" containerID="ccbb595be23997fa632d67b56dca3ba96afb8516a7445da4c3df939b780a7dbb" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.125551 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d64731cc-a4fe-498e-9553-4f7f5fce34a2","Type":"ContainerStarted","Data":"ca9596d81e7b75b0fb026c47b17e49acc7e7dc3a506d58adb77f7dd973178519"} Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.197531 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.197512501 podStartE2EDuration="4.197512501s" podCreationTimestamp="2025-11-27 17:32:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:32:48.159738709 +0000 UTC m=+1390.502565027" watchObservedRunningTime="2025-11-27 17:32:48.197512501 +0000 UTC m=+1390.540338819" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.242003 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.265779 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.283531 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:48 crc kubenswrapper[4792]: E1127 17:32:48.284548 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="ceilometer-central-agent" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.284566 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="ceilometer-central-agent" Nov 27 17:32:48 crc kubenswrapper[4792]: E1127 17:32:48.284581 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="proxy-httpd" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.284586 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="proxy-httpd" Nov 27 17:32:48 crc kubenswrapper[4792]: E1127 17:32:48.284722 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" 
containerName="sg-core" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.284730 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="sg-core" Nov 27 17:32:48 crc kubenswrapper[4792]: E1127 17:32:48.284743 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="ceilometer-notification-agent" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.284749 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="ceilometer-notification-agent" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.284968 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="ceilometer-central-agent" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.284985 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="proxy-httpd" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.285016 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="ceilometer-notification-agent" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.285025 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" containerName="sg-core" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.288120 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.291177 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.291427 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.298797 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.369638 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-config-data\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.369691 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.369744 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.369772 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-log-httpd\") pod \"ceilometer-0\" (UID: 
\"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.369799 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2ftc\" (UniqueName: \"kubernetes.io/projected/da906526-f35f-4512-991b-4aa75a7cd514-kube-api-access-z2ftc\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.369816 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-scripts\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.369860 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-run-httpd\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.471681 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-log-httpd\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.471735 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2ftc\" (UniqueName: \"kubernetes.io/projected/da906526-f35f-4512-991b-4aa75a7cd514-kube-api-access-z2ftc\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.471759 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-scripts\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.472115 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-run-httpd\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.472232 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-config-data\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.472256 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.472318 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.472399 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-log-httpd\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.474268 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-run-httpd\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.483581 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.484029 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.495476 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-config-data\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.496056 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2ftc\" (UniqueName: \"kubernetes.io/projected/da906526-f35f-4512-991b-4aa75a7cd514-kube-api-access-z2ftc\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.496338 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-scripts\") pod \"ceilometer-0\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.689306 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.710692 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c08ff73-f9d9-4b1d-9c57-28721cecc81e" path="/var/lib/kubelet/pods/6c08ff73-f9d9-4b1d-9c57-28721cecc81e/volumes" Nov 27 17:32:48 crc kubenswrapper[4792]: I1127 17:32:48.932514 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:49 crc kubenswrapper[4792]: I1127 17:32:49.305904 4792 scope.go:117] "RemoveContainer" containerID="65e1fa57a5a2b727047f62824011bdc7445757586db92af222ef02d1b6378714" Nov 27 17:32:49 crc kubenswrapper[4792]: I1127 17:32:49.367709 4792 scope.go:117] "RemoveContainer" containerID="46084b28937bcaa2cf6396a6154cb5cef5c0c492956f2579cc01a66dc6244c26" Nov 27 17:32:49 crc kubenswrapper[4792]: I1127 17:32:49.524892 4792 scope.go:117] "RemoveContainer" containerID="04a61673651a81fc62fccb7fe3ea345ef967ff4d0d354ef71b443cb4a5761027" Nov 27 17:32:50 crc kubenswrapper[4792]: I1127 17:32:50.156361 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-597ccfb68f-zfwfs" event={"ID":"71d08fed-4336-49f2-aae6-b3d3fff2a2e7","Type":"ContainerStarted","Data":"4d0ec7302e01e42f6e4aa16cee2501282f1687de0b35e183a13564887b03d3ab"} Nov 27 17:32:50 crc kubenswrapper[4792]: I1127 17:32:50.158034 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:50 crc kubenswrapper[4792]: I1127 17:32:50.166187 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:32:50 crc kubenswrapper[4792]: I1127 17:32:50.193111 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" event={"ID":"5c065100-450b-4cf1-b831-86963871ed12","Type":"ContainerStarted","Data":"38337d12e41c3b97654f83157e3229f8f79a20cb9aaacb1def4e46cd16bc78a1"} Nov 27 17:32:50 crc kubenswrapper[4792]: I1127 17:32:50.193461 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:50 crc kubenswrapper[4792]: I1127 17:32:50.205305 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-597ccfb68f-zfwfs" podStartSLOduration=3.256904242 podStartE2EDuration="7.205279333s" podCreationTimestamp="2025-11-27 17:32:43 +0000 UTC" firstStartedPulling="2025-11-27 17:32:45.52016635 +0000 UTC m=+1387.862992668" lastFinishedPulling="2025-11-27 17:32:49.468541441 +0000 UTC m=+1391.811367759" observedRunningTime="2025-11-27 17:32:50.182968187 +0000 UTC m=+1392.525794505" watchObservedRunningTime="2025-11-27 17:32:50.205279333 +0000 UTC m=+1392.548105651" Nov 27 17:32:50 crc kubenswrapper[4792]: I1127 17:32:50.225007 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" podStartSLOduration=3.101112029 podStartE2EDuration="7.224982374s" podCreationTimestamp="2025-11-27 17:32:43 +0000 UTC" firstStartedPulling="2025-11-27 17:32:45.343486076 +0000 UTC m=+1387.686312394" lastFinishedPulling="2025-11-27 17:32:49.467356421 +0000 UTC m=+1391.810182739" observedRunningTime="2025-11-27 17:32:50.210423251 +0000 UTC m=+1392.553249569" watchObservedRunningTime="2025-11-27 17:32:50.224982374 +0000 UTC m=+1392.567808702" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.207239 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerStarted","Data":"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d"} Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.207540 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerStarted","Data":"f8ecd432da967b1e318e479715a633634c78200e714ea584b8de30f6507396ab"} Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.258714 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6ccb4649c9-gt6v5"] Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.260271 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.274423 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-cdc8758cd-7x94h"] Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.275956 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.315697 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-584bf79d4f-9jdkj"] Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.321055 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.346089 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6ccb4649c9-gt6v5"] Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.370902 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data-custom\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.371300 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.371327 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-combined-ca-bundle\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.371372 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.371545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-combined-ca-bundle\") pod 
\"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.371566 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data-custom\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.371586 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4lpb\" (UniqueName: \"kubernetes.io/projected/25b34285-7729-45d3-981e-8a0c47edb784-kube-api-access-b4lpb\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.371625 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kjp\" (UniqueName: \"kubernetes.io/projected/574d8fe9-d9e1-436f-863f-2245cbecd37a-kube-api-access-b9kjp\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.376097 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-cdc8758cd-7x94h"] Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.405014 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-584bf79d4f-9jdkj"] Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474499 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data-custom\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474610 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474656 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-combined-ca-bundle\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474676 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data-custom\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474720 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4lpb\" (UniqueName: 
\"kubernetes.io/projected/25b34285-7729-45d3-981e-8a0c47edb784-kube-api-access-b4lpb\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474753 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9kjp\" (UniqueName: \"kubernetes.io/projected/574d8fe9-d9e1-436f-863f-2245cbecd37a-kube-api-access-b9kjp\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474781 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsdg\" (UniqueName: \"kubernetes.io/projected/bd0d0ef0-a6ab-40d7-9585-782096604d70-kube-api-access-vbsdg\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474819 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data-custom\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474874 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-combined-ca-bundle\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474956 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474972 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-combined-ca-bundle\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.474999 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.481337 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data-custom\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.487465 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-combined-ca-bundle\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.488419 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data-custom\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.490503 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-combined-ca-bundle\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.496926 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.501341 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.504292 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9kjp\" (UniqueName: \"kubernetes.io/projected/574d8fe9-d9e1-436f-863f-2245cbecd37a-kube-api-access-b9kjp\") pod \"heat-engine-6ccb4649c9-gt6v5\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.525366 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4lpb\" (UniqueName: \"kubernetes.io/projected/25b34285-7729-45d3-981e-8a0c47edb784-kube-api-access-b4lpb\") pod \"heat-cfnapi-cdc8758cd-7x94h\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.577221 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-combined-ca-bundle\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.577362 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data-custom\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.577418 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data\") pod 
\"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.577458 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsdg\" (UniqueName: \"kubernetes.io/projected/bd0d0ef0-a6ab-40d7-9585-782096604d70-kube-api-access-vbsdg\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.583741 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data-custom\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.584256 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.584266 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-combined-ca-bundle\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.593276 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbsdg\" (UniqueName: \"kubernetes.io/projected/bd0d0ef0-a6ab-40d7-9585-782096604d70-kube-api-access-vbsdg\") pod \"heat-api-584bf79d4f-9jdkj\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.600180 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.636518 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:32:51 crc kubenswrapper[4792]: I1127 17:32:51.653657 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:32:52 crc kubenswrapper[4792]: I1127 17:32:52.027151 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 27 17:32:52 crc kubenswrapper[4792]: I1127 17:32:52.027204 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 27 17:32:52 crc kubenswrapper[4792]: I1127 17:32:52.116366 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 27 17:32:52 crc kubenswrapper[4792]: I1127 17:32:52.116908 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 27 17:32:52 crc kubenswrapper[4792]: I1127 17:32:52.226725 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 27 17:32:52 crc kubenswrapper[4792]: I1127 17:32:52.226804 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.046441 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-597ccfb68f-zfwfs"] Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.065121 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-c5fd6ddbf-xxrj5"] Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.065339 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" podUID="5c065100-450b-4cf1-b831-86963871ed12" containerName="heat-cfnapi" containerID="cri-o://38337d12e41c3b97654f83157e3229f8f79a20cb9aaacb1def4e46cd16bc78a1" gracePeriod=60 Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.102516 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7c756f7b9-t4njz"] Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.104386 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.108015 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.108202 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.116991 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-688c7694b8-5sbvr"] Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.118480 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.120659 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.120834 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.140096 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c756f7b9-t4njz"] Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.160351 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-688c7694b8-5sbvr"] Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.212303 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.212381 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-public-tls-certs\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.212426 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58qs4\" (UniqueName: \"kubernetes.io/projected/0dc95085-b126-48ff-b0ea-98682fbf66fd-kube-api-access-58qs4\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.212450 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-combined-ca-bundle\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.212472 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-internal-tls-certs\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.212670 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-combined-ca-bundle\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.212736 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-public-tls-certs\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " 
pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.212877 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data-custom\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.213017 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data-custom\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.213057 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-internal-tls-certs\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.213134 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.213162 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pbnk\" (UniqueName: \"kubernetes.io/projected/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-kube-api-access-9pbnk\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.251516 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-597ccfb68f-zfwfs" podUID="71d08fed-4336-49f2-aae6-b3d3fff2a2e7" containerName="heat-api" containerID="cri-o://4d0ec7302e01e42f6e4aa16cee2501282f1687de0b35e183a13564887b03d3ab" gracePeriod=60 Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315310 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-public-tls-certs\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315467 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data-custom\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315562 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data-custom\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: 
\"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315588 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-internal-tls-certs\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315663 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315690 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pbnk\" (UniqueName: \"kubernetes.io/projected/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-kube-api-access-9pbnk\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315858 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315932 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-public-tls-certs\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.315981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58qs4\" (UniqueName: \"kubernetes.io/projected/0dc95085-b126-48ff-b0ea-98682fbf66fd-kube-api-access-58qs4\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.316014 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-combined-ca-bundle\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.316040 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-internal-tls-certs\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.316928 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-combined-ca-bundle\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " 
pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.321570 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-public-tls-certs\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.321952 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-internal-tls-certs\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.323897 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.324855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-combined-ca-bundle\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.324996 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data-custom\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.325941 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data-custom\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.332145 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-internal-tls-certs\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.332232 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-public-tls-certs\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.333253 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.336400 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9pbnk\" (UniqueName: \"kubernetes.io/projected/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-kube-api-access-9pbnk\") pod \"heat-api-7c756f7b9-t4njz\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") " pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.337140 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-combined-ca-bundle\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.342397 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58qs4\" (UniqueName: \"kubernetes.io/projected/0dc95085-b126-48ff-b0ea-98682fbf66fd-kube-api-access-58qs4\") pod \"heat-cfnapi-688c7694b8-5sbvr\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.440948 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:32:53 crc kubenswrapper[4792]: I1127 17:32:53.456483 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.052662 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.078851 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" podUID="5c065100-450b-4cf1-b831-86963871ed12" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.219:8000/healthcheck\": dial tcp 10.217.0.219:8000: connect: connection refused" Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.159666 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-62fkd"] Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.159962 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" podUID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerName="dnsmasq-dns" containerID="cri-o://2a4e10324be629cca9109f608bad0d982ac4c2ce4b73c0a6ca33fa53a6b66de3" gracePeriod=10 Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.219115 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-597ccfb68f-zfwfs" podUID="71d08fed-4336-49f2-aae6-b3d3fff2a2e7" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.220:8004/healthcheck\": dial tcp 10.217.0.220:8004: connect: connection refused" Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.260126 4792 generic.go:334] "Generic (PLEG): container finished" podID="71d08fed-4336-49f2-aae6-b3d3fff2a2e7" containerID="4d0ec7302e01e42f6e4aa16cee2501282f1687de0b35e183a13564887b03d3ab" exitCode=0 Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.260298 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-597ccfb68f-zfwfs" event={"ID":"71d08fed-4336-49f2-aae6-b3d3fff2a2e7","Type":"ContainerDied","Data":"4d0ec7302e01e42f6e4aa16cee2501282f1687de0b35e183a13564887b03d3ab"} Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.264931 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="5c065100-450b-4cf1-b831-86963871ed12" containerID="38337d12e41c3b97654f83157e3229f8f79a20cb9aaacb1def4e46cd16bc78a1" exitCode=0 Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.264978 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" event={"ID":"5c065100-450b-4cf1-b831-86963871ed12","Type":"ContainerDied","Data":"38337d12e41c3b97654f83157e3229f8f79a20cb9aaacb1def4e46cd16bc78a1"} Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.563118 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.563367 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.631424 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:54 crc kubenswrapper[4792]: I1127 17:32:54.659051 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:55 crc kubenswrapper[4792]: I1127 17:32:55.014087 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 27 17:32:55 crc kubenswrapper[4792]: I1127 17:32:55.014252 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:32:55 crc kubenswrapper[4792]: I1127 17:32:55.026019 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 27 17:32:55 crc kubenswrapper[4792]: I1127 17:32:55.278604 4792 generic.go:334] "Generic (PLEG): container finished" podID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerID="2a4e10324be629cca9109f608bad0d982ac4c2ce4b73c0a6ca33fa53a6b66de3" exitCode=0 Nov 27 17:32:55 crc kubenswrapper[4792]: I1127 17:32:55.279842 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" event={"ID":"9f38ffc0-9dc6-485a-835f-5d038444fa07","Type":"ContainerDied","Data":"2a4e10324be629cca9109f608bad0d982ac4c2ce4b73c0a6ca33fa53a6b66de3"} Nov 27 17:32:55 crc kubenswrapper[4792]: I1127 17:32:55.280845 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:55 crc kubenswrapper[4792]: I1127 17:32:55.280890 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:57 crc kubenswrapper[4792]: I1127 17:32:57.303749 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:32:57 crc kubenswrapper[4792]: I1127 17:32:57.304047 4792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 27 17:32:57 crc kubenswrapper[4792]: I1127 17:32:57.864334 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:57 crc kubenswrapper[4792]: I1127 17:32:57.865122 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.548287 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.640464 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-sb\") pod \"9f38ffc0-9dc6-485a-835f-5d038444fa07\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.640500 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t56tw\" (UniqueName: \"kubernetes.io/projected/9f38ffc0-9dc6-485a-835f-5d038444fa07-kube-api-access-t56tw\") pod \"9f38ffc0-9dc6-485a-835f-5d038444fa07\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.640524 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-nb\") pod \"9f38ffc0-9dc6-485a-835f-5d038444fa07\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.640586 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-svc\") pod \"9f38ffc0-9dc6-485a-835f-5d038444fa07\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.640612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-config\") pod \"9f38ffc0-9dc6-485a-835f-5d038444fa07\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.640835 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-swift-storage-0\") pod \"9f38ffc0-9dc6-485a-835f-5d038444fa07\" (UID: \"9f38ffc0-9dc6-485a-835f-5d038444fa07\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.704450 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f38ffc0-9dc6-485a-835f-5d038444fa07-kube-api-access-t56tw" (OuterVolumeSpecName: "kube-api-access-t56tw") pod "9f38ffc0-9dc6-485a-835f-5d038444fa07" (UID: "9f38ffc0-9dc6-485a-835f-5d038444fa07"). InnerVolumeSpecName "kube-api-access-t56tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.771206 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t56tw\" (UniqueName: \"kubernetes.io/projected/9f38ffc0-9dc6-485a-835f-5d038444fa07-kube-api-access-t56tw\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.864194 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-688c7694b8-5sbvr"] Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.885428 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.922740 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-config" (OuterVolumeSpecName: "config") pod "9f38ffc0-9dc6-485a-835f-5d038444fa07" (UID: "9f38ffc0-9dc6-485a-835f-5d038444fa07"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.927370 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9f38ffc0-9dc6-485a-835f-5d038444fa07" (UID: "9f38ffc0-9dc6-485a-835f-5d038444fa07"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.947042 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.990916 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data\") pod \"5c065100-450b-4cf1-b831-86963871ed12\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.990994 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-combined-ca-bundle\") pod \"5c065100-450b-4cf1-b831-86963871ed12\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.991018 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data-custom\") pod \"5c065100-450b-4cf1-b831-86963871ed12\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.991124 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqhbz\" (UniqueName: \"kubernetes.io/projected/5c065100-450b-4cf1-b831-86963871ed12-kube-api-access-gqhbz\") pod \"5c065100-450b-4cf1-b831-86963871ed12\" (UID: \"5c065100-450b-4cf1-b831-86963871ed12\") " Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.991938 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:32:59 crc kubenswrapper[4792]: I1127 17:32:59.991966 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.012346 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c065100-450b-4cf1-b831-86963871ed12-kube-api-access-gqhbz" (OuterVolumeSpecName: "kube-api-access-gqhbz") pod "5c065100-450b-4cf1-b831-86963871ed12" (UID: "5c065100-450b-4cf1-b831-86963871ed12"). InnerVolumeSpecName "kube-api-access-gqhbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.012966 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5c065100-450b-4cf1-b831-86963871ed12" (UID: "5c065100-450b-4cf1-b831-86963871ed12"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.024977 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f38ffc0-9dc6-485a-835f-5d038444fa07" (UID: "9f38ffc0-9dc6-485a-835f-5d038444fa07"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.064189 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f38ffc0-9dc6-485a-835f-5d038444fa07" (UID: "9f38ffc0-9dc6-485a-835f-5d038444fa07"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.096035 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data\") pod \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.096689 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgqck\" (UniqueName: \"kubernetes.io/projected/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-kube-api-access-jgqck\") pod \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.096953 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-combined-ca-bundle\") pod \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.096992 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data-custom\") pod \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\" (UID: \"71d08fed-4336-49f2-aae6-b3d3fff2a2e7\") " Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.103989 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.104016 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqhbz\" (UniqueName: \"kubernetes.io/projected/5c065100-450b-4cf1-b831-86963871ed12-kube-api-access-gqhbz\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.104027 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.104038 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.106495 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-kube-api-access-jgqck" (OuterVolumeSpecName: "kube-api-access-jgqck") pod "71d08fed-4336-49f2-aae6-b3d3fff2a2e7" (UID: "71d08fed-4336-49f2-aae6-b3d3fff2a2e7"). InnerVolumeSpecName "kube-api-access-jgqck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.130560 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f38ffc0-9dc6-485a-835f-5d038444fa07" (UID: "9f38ffc0-9dc6-485a-835f-5d038444fa07"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.155154 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "71d08fed-4336-49f2-aae6-b3d3fff2a2e7" (UID: "71d08fed-4336-49f2-aae6-b3d3fff2a2e7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.205612 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f38ffc0-9dc6-485a-835f-5d038444fa07-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.205667 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgqck\" (UniqueName: \"kubernetes.io/projected/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-kube-api-access-jgqck\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.205679 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.221972 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c065100-450b-4cf1-b831-86963871ed12" (UID: "5c065100-450b-4cf1-b831-86963871ed12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.231630 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data" (OuterVolumeSpecName: "config-data") pod "5c065100-450b-4cf1-b831-86963871ed12" (UID: "5c065100-450b-4cf1-b831-86963871ed12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.274849 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71d08fed-4336-49f2-aae6-b3d3fff2a2e7" (UID: "71d08fed-4336-49f2-aae6-b3d3fff2a2e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.304969 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-584bf79d4f-9jdkj"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.308720 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.308764 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.308778 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c065100-450b-4cf1-b831-86963871ed12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.337593 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data" (OuterVolumeSpecName: "config-data") pod "71d08fed-4336-49f2-aae6-b3d3fff2a2e7" (UID: "71d08fed-4336-49f2-aae6-b3d3fff2a2e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.365110 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" event={"ID":"0dc95085-b126-48ff-b0ea-98682fbf66fd","Type":"ContainerStarted","Data":"2df5b3f75bc069eeb60234dfc6484d5113e8bdc489688ad997ba41c4c4349aff"} Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.367310 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" event={"ID":"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9","Type":"ContainerStarted","Data":"d150a6bbbbf4de430746d37d3faeabd0a9dd0c6e7745b9a8e5786f1e0b786204"} Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.375497 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" event={"ID":"5c065100-450b-4cf1-b831-86963871ed12","Type":"ContainerDied","Data":"9046e5a023bbd95eaf7408586cbc1d3ba30f39169415df44c9e7c4b0404ae6b9"} Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.375545 4792 scope.go:117] "RemoveContainer" containerID="38337d12e41c3b97654f83157e3229f8f79a20cb9aaacb1def4e46cd16bc78a1" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.375683 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.385300 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-584bf79d4f-9jdkj" event={"ID":"bd0d0ef0-a6ab-40d7-9585-782096604d70","Type":"ContainerStarted","Data":"f2ad7ce2d49d319f082c87af3d851731ac793ece8cb6aa20f3e2058555e5d1f6"} Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.397543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" event={"ID":"9f38ffc0-9dc6-485a-835f-5d038444fa07","Type":"ContainerDied","Data":"91e6f99403948ec8f258104ce1492e67b30a7a14e4dbd220e59eed7c9cfcf21b"} Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.397630 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.400822 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" podStartSLOduration=2.617677612 podStartE2EDuration="19.400800511s" podCreationTimestamp="2025-11-27 17:32:41 +0000 UTC" firstStartedPulling="2025-11-27 17:32:42.800291169 +0000 UTC m=+1385.143117477" lastFinishedPulling="2025-11-27 17:32:59.583414058 +0000 UTC m=+1401.926240376" observedRunningTime="2025-11-27 17:33:00.395404936 +0000 UTC m=+1402.738231254" watchObservedRunningTime="2025-11-27 17:33:00.400800511 +0000 UTC m=+1402.743626829" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.408480 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerStarted","Data":"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508"} Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.411693 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71d08fed-4336-49f2-aae6-b3d3fff2a2e7-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.420971 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-597ccfb68f-zfwfs" event={"ID":"71d08fed-4336-49f2-aae6-b3d3fff2a2e7","Type":"ContainerDied","Data":"93984e761293b8a9502818d956056e32651aebff42acb664b383009e93e2c99d"} Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.421196 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-597ccfb68f-zfwfs" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.438682 4792 scope.go:117] "RemoveContainer" containerID="2a4e10324be629cca9109f608bad0d982ac4c2ce4b73c0a6ca33fa53a6b66de3" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.445198 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-62fkd"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.463143 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-62fkd"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.477901 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-c5fd6ddbf-xxrj5"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.480738 4792 scope.go:117] "RemoveContainer" containerID="a20bce2a4495f2a0e91585b78eabb50e6832af3b03208ac1472e5f3b4b04cea7" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.489699 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-c5fd6ddbf-xxrj5"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.516054 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-597ccfb68f-zfwfs"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.536006 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-597ccfb68f-zfwfs"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.537714 4792 scope.go:117] "RemoveContainer" containerID="4d0ec7302e01e42f6e4aa16cee2501282f1687de0b35e183a13564887b03d3ab" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.549363 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-cdc8758cd-7x94h"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.611366 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c756f7b9-t4njz"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.637080 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6ccb4649c9-gt6v5"] Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.702775 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c065100-450b-4cf1-b831-86963871ed12" path="/var/lib/kubelet/pods/5c065100-450b-4cf1-b831-86963871ed12/volumes" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.703304 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71d08fed-4336-49f2-aae6-b3d3fff2a2e7" path="/var/lib/kubelet/pods/71d08fed-4336-49f2-aae6-b3d3fff2a2e7/volumes" Nov 27 17:33:00 crc kubenswrapper[4792]: I1127 17:33:00.703807 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f38ffc0-9dc6-485a-835f-5d038444fa07" path="/var/lib/kubelet/pods/9f38ffc0-9dc6-485a-835f-5d038444fa07/volumes" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.439494 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" event={"ID":"0dc95085-b126-48ff-b0ea-98682fbf66fd","Type":"ContainerStarted","Data":"57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647"} Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.440049 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.447570 4792 generic.go:334] "Generic (PLEG): container finished" podID="bd0d0ef0-a6ab-40d7-9585-782096604d70" containerID="a824f8679635145b603c35505ea1b0afe6c13101780a94bb54eaf62954748dae" 
exitCode=1 Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.447617 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-584bf79d4f-9jdkj" event={"ID":"bd0d0ef0-a6ab-40d7-9585-782096604d70","Type":"ContainerDied","Data":"a824f8679635145b603c35505ea1b0afe6c13101780a94bb54eaf62954748dae"} Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.448001 4792 scope.go:117] "RemoveContainer" containerID="a824f8679635145b603c35505ea1b0afe6c13101780a94bb54eaf62954748dae" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.462217 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c756f7b9-t4njz" event={"ID":"d88b28ec-1e65-4b0c-b691-9c44bef0ef06","Type":"ContainerStarted","Data":"9e17a21ccbb0389e068575269c7a015a6b408876fae596783fcc71ee3bb192fb"} Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.464792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c756f7b9-t4njz" event={"ID":"d88b28ec-1e65-4b0c-b691-9c44bef0ef06","Type":"ContainerStarted","Data":"268a5545eb3bdcc237d8d40fa7e80e25b2daad8364f07c631e400c1a0fdb6e90"} Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.464822 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.465474 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6ccb4649c9-gt6v5" event={"ID":"574d8fe9-d9e1-436f-863f-2245cbecd37a","Type":"ContainerStarted","Data":"aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480"} Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.465503 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6ccb4649c9-gt6v5" event={"ID":"574d8fe9-d9e1-436f-863f-2245cbecd37a","Type":"ContainerStarted","Data":"8aff2f1b877037b50141c7c6d4cb0eafbe01b844fe8f067e030cf6f3676a3c93"} Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.466117 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.470695 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" podStartSLOduration=8.470679316 podStartE2EDuration="8.470679316s" podCreationTimestamp="2025-11-27 17:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:01.455484297 +0000 UTC m=+1403.798310615" watchObservedRunningTime="2025-11-27 17:33:01.470679316 +0000 UTC m=+1403.813505634" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.470870 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" event={"ID":"25b34285-7729-45d3-981e-8a0c47edb784","Type":"ContainerStarted","Data":"c28704938eef3353b1071cf0a3662dc14fd28e00ac687b0a266571d13c9647bc"} Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.470896 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" event={"ID":"25b34285-7729-45d3-981e-8a0c47edb784","Type":"ContainerStarted","Data":"50e6933bdc165bd23781fb59b6c3640df74452e688a0c3df511e9082c2c7d64b"} Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.470914 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.508975 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" podStartSLOduration=10.50895623 podStartE2EDuration="10.50895623s" podCreationTimestamp="2025-11-27 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:01.494477539 +0000 UTC m=+1403.837303867" watchObservedRunningTime="2025-11-27 17:33:01.50895623 +0000 UTC m=+1403.851782548" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.536439 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7c756f7b9-t4njz" podStartSLOduration=8.536411145 podStartE2EDuration="8.536411145s" podCreationTimestamp="2025-11-27 17:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:01.518905158 +0000 UTC m=+1403.861731486" watchObservedRunningTime="2025-11-27 17:33:01.536411145 +0000 UTC m=+1403.879237483" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.562515 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6ccb4649c9-gt6v5" podStartSLOduration=10.562491905 podStartE2EDuration="10.562491905s" podCreationTimestamp="2025-11-27 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:01.539176963 +0000 UTC m=+1403.882003281" watchObservedRunningTime="2025-11-27 17:33:01.562491905 +0000 UTC m=+1403.905318223" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.654922 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:33:01 crc kubenswrapper[4792]: I1127 17:33:01.654969 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:33:02 crc kubenswrapper[4792]: I1127 17:33:02.498974 4792 generic.go:334] "Generic (PLEG): container finished" podID="25b34285-7729-45d3-981e-8a0c47edb784" containerID="c28704938eef3353b1071cf0a3662dc14fd28e00ac687b0a266571d13c9647bc" exitCode=1 Nov 27 17:33:02 crc kubenswrapper[4792]: I1127 17:33:02.499617 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" event={"ID":"25b34285-7729-45d3-981e-8a0c47edb784","Type":"ContainerDied","Data":"c28704938eef3353b1071cf0a3662dc14fd28e00ac687b0a266571d13c9647bc"} Nov 27 17:33:02 crc kubenswrapper[4792]: I1127 17:33:02.499680 4792 scope.go:117] "RemoveContainer" containerID="c28704938eef3353b1071cf0a3662dc14fd28e00ac687b0a266571d13c9647bc" Nov 27 17:33:02 crc kubenswrapper[4792]: I1127 17:33:02.503612 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-584bf79d4f-9jdkj" event={"ID":"bd0d0ef0-a6ab-40d7-9585-782096604d70","Type":"ContainerStarted","Data":"09ab4f0b2ef6bfeab28d638822a04703e9f277e60d51c0e54210d2202f80348e"} Nov 27 17:33:02 crc kubenswrapper[4792]: I1127 17:33:02.504170 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:33:02 crc kubenswrapper[4792]: I1127 17:33:02.514258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerStarted","Data":"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199"} Nov 27 17:33:02 crc 
kubenswrapper[4792]: I1127 17:33:02.543385 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-584bf79d4f-9jdkj" podStartSLOduration=11.543366113 podStartE2EDuration="11.543366113s" podCreationTimestamp="2025-11-27 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:02.536726967 +0000 UTC m=+1404.879553295" watchObservedRunningTime="2025-11-27 17:33:02.543366113 +0000 UTC m=+1404.886192431" Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.529318 4792 generic.go:334] "Generic (PLEG): container finished" podID="bd0d0ef0-a6ab-40d7-9585-782096604d70" containerID="09ab4f0b2ef6bfeab28d638822a04703e9f277e60d51c0e54210d2202f80348e" exitCode=1 Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.530239 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-584bf79d4f-9jdkj" event={"ID":"bd0d0ef0-a6ab-40d7-9585-782096604d70","Type":"ContainerDied","Data":"09ab4f0b2ef6bfeab28d638822a04703e9f277e60d51c0e54210d2202f80348e"} Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.531972 4792 scope.go:117] "RemoveContainer" containerID="09ab4f0b2ef6bfeab28d638822a04703e9f277e60d51c0e54210d2202f80348e" Nov 27 17:33:03 crc kubenswrapper[4792]: E1127 17:33:03.532860 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-584bf79d4f-9jdkj_openstack(bd0d0ef0-a6ab-40d7-9585-782096604d70)\"" pod="openstack/heat-api-584bf79d4f-9jdkj" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.533143 4792 scope.go:117] "RemoveContainer" containerID="a824f8679635145b603c35505ea1b0afe6c13101780a94bb54eaf62954748dae" Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.538797 4792 generic.go:334] "Generic (PLEG): container finished" podID="25b34285-7729-45d3-981e-8a0c47edb784" containerID="f1b477ba4fcfebc6193fd0c2eefeeb6ccc530817fbf3b916a7846ce3628f33f5" exitCode=1 Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.540247 4792 scope.go:117] "RemoveContainer" containerID="f1b477ba4fcfebc6193fd0c2eefeeb6ccc530817fbf3b916a7846ce3628f33f5" Nov 27 17:33:03 crc kubenswrapper[4792]: E1127 17:33:03.540569 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cdc8758cd-7x94h_openstack(25b34285-7729-45d3-981e-8a0c47edb784)\"" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" podUID="25b34285-7729-45d3-981e-8a0c47edb784" Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.540842 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" event={"ID":"25b34285-7729-45d3-981e-8a0c47edb784","Type":"ContainerDied","Data":"f1b477ba4fcfebc6193fd0c2eefeeb6ccc530817fbf3b916a7846ce3628f33f5"} Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.655853 4792 scope.go:117] "RemoveContainer" containerID="c28704938eef3353b1071cf0a3662dc14fd28e00ac687b0a266571d13c9647bc" Nov 27 17:33:03 crc kubenswrapper[4792]: I1127 17:33:03.889673 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.169872 4792 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-6bb4fc677f-62fkd" podUID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.200:5353: i/o timeout" Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.552500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerStarted","Data":"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7"} Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.552806 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.552789 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="proxy-httpd" containerID="cri-o://e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7" gracePeriod=30 Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.552829 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="sg-core" containerID="cri-o://db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199" gracePeriod=30 Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.552801 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="ceilometer-central-agent" containerID="cri-o://efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d" gracePeriod=30 Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.552831 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="ceilometer-notification-agent" containerID="cri-o://965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508" gracePeriod=30 Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.555340 4792 scope.go:117] "RemoveContainer" containerID="f1b477ba4fcfebc6193fd0c2eefeeb6ccc530817fbf3b916a7846ce3628f33f5" Nov 27 17:33:04 crc kubenswrapper[4792]: E1127 17:33:04.556016 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cdc8758cd-7x94h_openstack(25b34285-7729-45d3-981e-8a0c47edb784)\"" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" podUID="25b34285-7729-45d3-981e-8a0c47edb784" Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.565303 4792 scope.go:117] "RemoveContainer" containerID="09ab4f0b2ef6bfeab28d638822a04703e9f277e60d51c0e54210d2202f80348e" Nov 27 17:33:04 crc kubenswrapper[4792]: E1127 17:33:04.565700 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-584bf79d4f-9jdkj_openstack(bd0d0ef0-a6ab-40d7-9585-782096604d70)\"" pod="openstack/heat-api-584bf79d4f-9jdkj" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" Nov 27 17:33:04 crc kubenswrapper[4792]: I1127 17:33:04.583745 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.807666266 podStartE2EDuration="16.583726907s" podCreationTimestamp="2025-11-27 17:32:48 +0000 UTC" 
firstStartedPulling="2025-11-27 17:32:50.192382171 +0000 UTC m=+1392.535208489" lastFinishedPulling="2025-11-27 17:33:03.968442812 +0000 UTC m=+1406.311269130" observedRunningTime="2025-11-27 17:33:04.577214735 +0000 UTC m=+1406.920041063" watchObservedRunningTime="2025-11-27 17:33:04.583726907 +0000 UTC m=+1406.926553225" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.471442 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.517134 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-run-httpd\") pod \"da906526-f35f-4512-991b-4aa75a7cd514\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.517410 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-log-httpd\") pod \"da906526-f35f-4512-991b-4aa75a7cd514\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.517458 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2ftc\" (UniqueName: \"kubernetes.io/projected/da906526-f35f-4512-991b-4aa75a7cd514-kube-api-access-z2ftc\") pod \"da906526-f35f-4512-991b-4aa75a7cd514\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.517632 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-combined-ca-bundle\") pod \"da906526-f35f-4512-991b-4aa75a7cd514\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.518161 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "da906526-f35f-4512-991b-4aa75a7cd514" (UID: "da906526-f35f-4512-991b-4aa75a7cd514"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.518693 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "da906526-f35f-4512-991b-4aa75a7cd514" (UID: "da906526-f35f-4512-991b-4aa75a7cd514"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.519829 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.520168 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da906526-f35f-4512-991b-4aa75a7cd514-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.528009 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da906526-f35f-4512-991b-4aa75a7cd514-kube-api-access-z2ftc" (OuterVolumeSpecName: "kube-api-access-z2ftc") pod "da906526-f35f-4512-991b-4aa75a7cd514" (UID: "da906526-f35f-4512-991b-4aa75a7cd514"). InnerVolumeSpecName "kube-api-access-z2ftc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.600994 4792 generic.go:334] "Generic (PLEG): container finished" podID="da906526-f35f-4512-991b-4aa75a7cd514" containerID="e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7" exitCode=0 Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.601304 4792 generic.go:334] "Generic (PLEG): container finished" podID="da906526-f35f-4512-991b-4aa75a7cd514" containerID="db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199" exitCode=2 Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.601313 4792 generic.go:334] "Generic (PLEG): container finished" podID="da906526-f35f-4512-991b-4aa75a7cd514" containerID="965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508" exitCode=0 Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.601324 4792 generic.go:334] "Generic (PLEG): container finished" podID="da906526-f35f-4512-991b-4aa75a7cd514" containerID="efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d" exitCode=0 Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.601131 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.601054 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerDied","Data":"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7"} Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.602272 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerDied","Data":"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199"} Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.602352 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerDied","Data":"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508"} Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.602413 4792 scope.go:117] "RemoveContainer" containerID="e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.602422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerDied","Data":"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d"} Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.602543 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da906526-f35f-4512-991b-4aa75a7cd514","Type":"ContainerDied","Data":"f8ecd432da967b1e318e479715a633634c78200e714ea584b8de30f6507396ab"} Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.611788 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da906526-f35f-4512-991b-4aa75a7cd514" (UID: "da906526-f35f-4512-991b-4aa75a7cd514"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.622266 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-sg-core-conf-yaml\") pod \"da906526-f35f-4512-991b-4aa75a7cd514\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.622329 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-scripts\") pod \"da906526-f35f-4512-991b-4aa75a7cd514\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.622356 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-config-data\") pod \"da906526-f35f-4512-991b-4aa75a7cd514\" (UID: \"da906526-f35f-4512-991b-4aa75a7cd514\") " Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.622935 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2ftc\" (UniqueName: \"kubernetes.io/projected/da906526-f35f-4512-991b-4aa75a7cd514-kube-api-access-z2ftc\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.622954 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.627820 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-scripts" (OuterVolumeSpecName: "scripts") pod "da906526-f35f-4512-991b-4aa75a7cd514" (UID: "da906526-f35f-4512-991b-4aa75a7cd514"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.628043 4792 scope.go:117] "RemoveContainer" containerID="db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.653105 4792 scope.go:117] "RemoveContainer" containerID="965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.664301 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "da906526-f35f-4512-991b-4aa75a7cd514" (UID: "da906526-f35f-4512-991b-4aa75a7cd514"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.696166 4792 scope.go:117] "RemoveContainer" containerID="efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.725336 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.725368 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.750818 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-config-data" (OuterVolumeSpecName: "config-data") pod "da906526-f35f-4512-991b-4aa75a7cd514" (UID: "da906526-f35f-4512-991b-4aa75a7cd514"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.827913 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da906526-f35f-4512-991b-4aa75a7cd514-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.836217 4792 scope.go:117] "RemoveContainer" containerID="e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.836621 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": container with ID starting with e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7 not found: ID does not exist" containerID="e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.836665 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7"} err="failed to get container status \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": rpc error: code = NotFound desc = could not find container \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": container with ID starting with e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.836684 4792 scope.go:117] "RemoveContainer" containerID="db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.836897 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": container with ID starting with db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199 not found: ID does not exist" containerID="db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.836914 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199"} 
err="failed to get container status \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": rpc error: code = NotFound desc = could not find container \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": container with ID starting with db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.836926 4792 scope.go:117] "RemoveContainer" containerID="965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.837288 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": container with ID starting with 965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508 not found: ID does not exist" containerID="965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.837304 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508"} err="failed to get container status \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": rpc error: code = NotFound desc = could not find container \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": container with ID starting with 965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.837318 4792 scope.go:117] "RemoveContainer" containerID="efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.837488 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": container with ID starting with efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d not found: ID does not exist" containerID="efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.837502 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d"} err="failed to get container status \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": rpc error: code = NotFound desc = could not find container \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": container with ID starting with efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.837515 4792 scope.go:117] "RemoveContainer" containerID="e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.837786 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7"} err="failed to get container status \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": rpc error: code = NotFound desc = could not find container \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": container with ID starting with 
e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.837799 4792 scope.go:117] "RemoveContainer" containerID="db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838019 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199"} err="failed to get container status \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": rpc error: code = NotFound desc = could not find container \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": container with ID starting with db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838032 4792 scope.go:117] "RemoveContainer" containerID="965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838254 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508"} err="failed to get container status \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": rpc error: code = NotFound desc = could not find container \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": container with ID starting with 965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838268 4792 scope.go:117] "RemoveContainer" containerID="efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838469 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d"} err="failed to get container status \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": rpc error: code = NotFound desc = could not find container \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": container with ID starting with efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838481 4792 scope.go:117] "RemoveContainer" containerID="e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838699 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7"} err="failed to get container status \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": rpc error: code = NotFound desc = could not find container \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": container with ID starting with e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838717 4792 scope.go:117] "RemoveContainer" containerID="db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838884 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199"} err="failed to get container status \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": rpc error: code = NotFound desc = could not find container \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": container with ID starting with db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.838898 4792 scope.go:117] "RemoveContainer" containerID="965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839067 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508"} err="failed to get container status \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": rpc error: code = NotFound desc = could not find container \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": container with ID starting with 965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839080 4792 scope.go:117] "RemoveContainer" containerID="efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839246 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d"} err="failed to get container status \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": rpc error: code = NotFound desc = could not find container \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": container with ID starting with efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839261 4792 scope.go:117] "RemoveContainer" containerID="e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839441 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7"} err="failed to get container status \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": rpc error: code = NotFound desc = could not find container \"e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7\": container with ID starting with e48b9bbaef91821b70c757646af6e00676b2fcaa1a24f9852a1e3715508938e7 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839456 4792 scope.go:117] "RemoveContainer" containerID="db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839683 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199"} err="failed to get container status \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": rpc error: code = NotFound desc = could not find container \"db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199\": container with ID starting with db2f0d330d9869a74288b88f8b864f15eec4f8925cade0564aa7e8176f59a199 not found: ID does not exist" Nov 
27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839697 4792 scope.go:117] "RemoveContainer" containerID="965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839868 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508"} err="failed to get container status \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": rpc error: code = NotFound desc = could not find container \"965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508\": container with ID starting with 965c9427ba3c1845b005f5cf9880c2bfcc01fad1d0e7eb3c134355d89618b508 not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.839879 4792 scope.go:117] "RemoveContainer" containerID="efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.840046 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d"} err="failed to get container status \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": rpc error: code = NotFound desc = could not find container \"efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d\": container with ID starting with efbed4887cbadca984803818d339680f7a89ae0370e734b344a73ebc798ade8d not found: ID does not exist" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.942338 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.953853 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.964331 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.964956 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c065100-450b-4cf1-b831-86963871ed12" containerName="heat-cfnapi" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.964974 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c065100-450b-4cf1-b831-86963871ed12" containerName="heat-cfnapi" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.964994 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerName="dnsmasq-dns" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965002 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerName="dnsmasq-dns" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.965017 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="ceilometer-central-agent" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965023 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="ceilometer-central-agent" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.965044 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="sg-core" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965049 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="sg-core" 
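The burst of "RemoveContainer" / "DeleteContainer returned error ... NotFound" entries above is the kubelet repeatedly asking CRI-O for the status of the four ceilometer-0 containers it has already deleted; once a container is gone from the runtime, each retry logs a benign NotFound and moves on. A quick way to confirm the noise is confined to those four IDs is to count NotFound errors per container ID. Below is a minimal sketch, assuming the journal excerpt is saved to a plain-text file (the filename and helper name are ours, not part of any real tooling); it scans the whole text with DOTALL matching so IDs split across wrapped lines are still found.

```python
# count_notfound.py -- a minimal sketch (name hypothetical), assuming the
# journal excerpt above is saved as plain text, with entries possibly
# wrapped across physical lines.
import re
import sys
from collections import Counter

# Matches the container ID attached to a "DeleteContainer returned error"
# entry, e.g. containerID={"Type":"cri-o","ID":"e48b9bba..."}.
# re.S lets the match cross a line wrap between the phrase and the ID.
DELETE_ERR = re.compile(
    r'DeleteContainer returned error.*?"ID":"([0-9a-f]{64})"', re.S
)

def count_delete_errors(path):
    """Return a Counter mapping container ID -> NotFound occurrences."""
    with open(path, encoding="utf-8") as fh:
        return Counter(DELETE_ERR.findall(fh.read()))

if __name__ == "__main__":
    # usage: python count_notfound.py kubelet.log
    for cid, n in count_delete_errors(sys.argv[1]).most_common():
        print(f"{n:3d}  {cid[:12]}")  # a short ID prefix is enough to eyeball
```

In this excerpt every hit should resolve to the same four ceilometer-0 container IDs (e48b9bba…, db2f0d33…, 965c9427…, efbed488…), which is the expected, self-limiting pattern after a pod teardown; a long tail of other IDs would instead point at a real runtime problem.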
Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.965060 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerName="init" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965066 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerName="init" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.965078 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="ceilometer-notification-agent" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965083 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="ceilometer-notification-agent" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.965093 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d08fed-4336-49f2-aae6-b3d3fff2a2e7" containerName="heat-api" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965099 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d08fed-4336-49f2-aae6-b3d3fff2a2e7" containerName="heat-api" Nov 27 17:33:05 crc kubenswrapper[4792]: E1127 17:33:05.965110 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="proxy-httpd" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965116 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="proxy-httpd" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965333 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="sg-core" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965354 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f38ffc0-9dc6-485a-835f-5d038444fa07" containerName="dnsmasq-dns" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965366 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="ceilometer-notification-agent" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965383 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c065100-450b-4cf1-b831-86963871ed12" containerName="heat-cfnapi" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965390 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="proxy-httpd" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965399 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="71d08fed-4336-49f2-aae6-b3d3fff2a2e7" containerName="heat-api" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.965412 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="da906526-f35f-4512-991b-4aa75a7cd514" containerName="ceilometer-central-agent" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.967701 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.969815 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.970014 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:33:05 crc kubenswrapper[4792]: I1127 17:33:05.977113 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.133734 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.133796 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-run-httpd\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.133891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-scripts\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.133929 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-log-httpd\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.133974 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkz2d\" (UniqueName: \"kubernetes.io/projected/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-kube-api-access-rkz2d\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.134016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.134322 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-config-data\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.236376 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-config-data\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.236823 
4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.236938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-run-httpd\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.237029 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-scripts\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.237133 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-log-httpd\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.237261 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkz2d\" (UniqueName: \"kubernetes.io/projected/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-kube-api-access-rkz2d\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.237401 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.238914 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-log-httpd\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.239436 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-run-httpd\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.246502 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-config-data\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.247086 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.256509 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-scripts\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.261764 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.264133 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkz2d\" (UniqueName: \"kubernetes.io/projected/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-kube-api-access-rkz2d\") pod \"ceilometer-0\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") " pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.292563 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.637955 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.638276 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.639115 4792 scope.go:117] "RemoveContainer" containerID="f1b477ba4fcfebc6193fd0c2eefeeb6ccc530817fbf3b916a7846ce3628f33f5" Nov 27 17:33:06 crc kubenswrapper[4792]: E1127 17:33:06.639404 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cdc8758cd-7x94h_openstack(25b34285-7729-45d3-981e-8a0c47edb784)\"" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" podUID="25b34285-7729-45d3-981e-8a0c47edb784" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.654580 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.655798 4792 scope.go:117] "RemoveContainer" containerID="09ab4f0b2ef6bfeab28d638822a04703e9f277e60d51c0e54210d2202f80348e" Nov 27 17:33:06 crc kubenswrapper[4792]: E1127 17:33:06.656466 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-584bf79d4f-9jdkj_openstack(bd0d0ef0-a6ab-40d7-9585-782096604d70)\"" pod="openstack/heat-api-584bf79d4f-9jdkj" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.719602 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da906526-f35f-4512-991b-4aa75a7cd514" path="/var/lib/kubelet/pods/da906526-f35f-4512-991b-4aa75a7cd514/volumes" Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.761727 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:33:06 crc kubenswrapper[4792]: I1127 17:33:06.816246 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:33:06 crc kubenswrapper[4792]: W1127 17:33:06.822190 4792 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2bd29c6_0b83_4b93_8c18_e4733154d3d9.slice/crio-bb81203d9e8943a9d6656aeff80a48aef4d4807e00ac5c11c514ff7bb070d064 WatchSource:0}: Error finding container bb81203d9e8943a9d6656aeff80a48aef4d4807e00ac5c11c514ff7bb070d064: Status 404 returned error can't find the container with id bb81203d9e8943a9d6656aeff80a48aef4d4807e00ac5c11c514ff7bb070d064 Nov 27 17:33:07 crc kubenswrapper[4792]: I1127 17:33:07.632507 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerStarted","Data":"bb81203d9e8943a9d6656aeff80a48aef4d4807e00ac5c11c514ff7bb070d064"} Nov 27 17:33:08 crc kubenswrapper[4792]: I1127 17:33:08.660474 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerStarted","Data":"b1bc9c74ca2608bfd1db4fd2c3f7fc871f3c551969b6a729b285402be28327cf"} Nov 27 17:33:09 crc kubenswrapper[4792]: I1127 17:33:09.077796 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-c5fd6ddbf-xxrj5" podUID="5c065100-450b-4cf1-b831-86963871ed12" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.219:8000/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 27 17:33:09 crc kubenswrapper[4792]: I1127 17:33:09.217374 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-597ccfb68f-zfwfs" podUID="71d08fed-4336-49f2-aae6-b3d3fff2a2e7" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.220:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 27 17:33:09 crc kubenswrapper[4792]: I1127 17:33:09.674688 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerStarted","Data":"8bbe36b40552e7778425e5f7b91447fc7d75db09f335d02b3f49e321a57dade6"} Nov 27 17:33:10 crc kubenswrapper[4792]: I1127 17:33:10.091701 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:33:10 crc kubenswrapper[4792]: I1127 17:33:10.139681 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:33:10 crc kubenswrapper[4792]: I1127 17:33:10.155211 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-cdc8758cd-7x94h"] Nov 27 17:33:10 crc kubenswrapper[4792]: I1127 17:33:10.239453 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-584bf79d4f-9jdkj"] Nov 27 17:33:10 crc kubenswrapper[4792]: I1127 17:33:10.713506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerStarted","Data":"1a22b4a8b45d7f750953ad80953b3429e0443f62ddbf2641877e993b32e13ddf"} Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.016965 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.022919 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.057961 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-combined-ca-bundle\") pod \"25b34285-7729-45d3-981e-8a0c47edb784\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.058043 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data-custom\") pod \"25b34285-7729-45d3-981e-8a0c47edb784\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.058147 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data\") pod \"bd0d0ef0-a6ab-40d7-9585-782096604d70\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.058214 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4lpb\" (UniqueName: \"kubernetes.io/projected/25b34285-7729-45d3-981e-8a0c47edb784-kube-api-access-b4lpb\") pod \"25b34285-7729-45d3-981e-8a0c47edb784\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.058248 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data\") pod \"25b34285-7729-45d3-981e-8a0c47edb784\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.058267 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-combined-ca-bundle\") pod \"bd0d0ef0-a6ab-40d7-9585-782096604d70\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.058306 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbsdg\" (UniqueName: \"kubernetes.io/projected/bd0d0ef0-a6ab-40d7-9585-782096604d70-kube-api-access-vbsdg\") pod \"bd0d0ef0-a6ab-40d7-9585-782096604d70\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.058332 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data-custom\") pod \"bd0d0ef0-a6ab-40d7-9585-782096604d70\" (UID: \"bd0d0ef0-a6ab-40d7-9585-782096604d70\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.064761 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bd0d0ef0-a6ab-40d7-9585-782096604d70" (UID: "bd0d0ef0-a6ab-40d7-9585-782096604d70"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.074003 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0d0ef0-a6ab-40d7-9585-782096604d70-kube-api-access-vbsdg" (OuterVolumeSpecName: "kube-api-access-vbsdg") pod "bd0d0ef0-a6ab-40d7-9585-782096604d70" (UID: "bd0d0ef0-a6ab-40d7-9585-782096604d70"). InnerVolumeSpecName "kube-api-access-vbsdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.074116 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "25b34285-7729-45d3-981e-8a0c47edb784" (UID: "25b34285-7729-45d3-981e-8a0c47edb784"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.097882 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b34285-7729-45d3-981e-8a0c47edb784-kube-api-access-b4lpb" (OuterVolumeSpecName: "kube-api-access-b4lpb") pod "25b34285-7729-45d3-981e-8a0c47edb784" (UID: "25b34285-7729-45d3-981e-8a0c47edb784"). InnerVolumeSpecName "kube-api-access-b4lpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.160245 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25b34285-7729-45d3-981e-8a0c47edb784" (UID: "25b34285-7729-45d3-981e-8a0c47edb784"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.160679 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-combined-ca-bundle\") pod \"25b34285-7729-45d3-981e-8a0c47edb784\" (UID: \"25b34285-7729-45d3-981e-8a0c47edb784\") " Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.161441 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.161465 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4lpb\" (UniqueName: \"kubernetes.io/projected/25b34285-7729-45d3-981e-8a0c47edb784-kube-api-access-b4lpb\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.161480 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbsdg\" (UniqueName: \"kubernetes.io/projected/bd0d0ef0-a6ab-40d7-9585-782096604d70-kube-api-access-vbsdg\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.161491 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:11 crc kubenswrapper[4792]: W1127 17:33:11.161550 4792 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/25b34285-7729-45d3-981e-8a0c47edb784/volumes/kubernetes.io~secret/combined-ca-bundle Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.161584 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25b34285-7729-45d3-981e-8a0c47edb784" (UID: "25b34285-7729-45d3-981e-8a0c47edb784"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.175800 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd0d0ef0-a6ab-40d7-9585-782096604d70" (UID: "bd0d0ef0-a6ab-40d7-9585-782096604d70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.189981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data" (OuterVolumeSpecName: "config-data") pod "25b34285-7729-45d3-981e-8a0c47edb784" (UID: "25b34285-7729-45d3-981e-8a0c47edb784"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.206186 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data" (OuterVolumeSpecName: "config-data") pod "bd0d0ef0-a6ab-40d7-9585-782096604d70" (UID: "bd0d0ef0-a6ab-40d7-9585-782096604d70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.264117 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.264153 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.264166 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25b34285-7729-45d3-981e-8a0c47edb784-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.264178 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd0d0ef0-a6ab-40d7-9585-782096604d70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.638791 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.686478 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-85bccd9657-54g5w"] Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.686748 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-85bccd9657-54g5w" podUID="48dc6ea7-886f-4f25-a954-635e730e6b81" containerName="heat-engine" containerID="cri-o://397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" gracePeriod=60 Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.713363 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" event={"ID":"25b34285-7729-45d3-981e-8a0c47edb784","Type":"ContainerDied","Data":"50e6933bdc165bd23781fb59b6c3640df74452e688a0c3df511e9082c2c7d64b"} Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.713379 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cdc8758cd-7x94h" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.713418 4792 scope.go:117] "RemoveContainer" containerID="f1b477ba4fcfebc6193fd0c2eefeeb6ccc530817fbf3b916a7846ce3628f33f5" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.715386 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-584bf79d4f-9jdkj" event={"ID":"bd0d0ef0-a6ab-40d7-9585-782096604d70","Type":"ContainerDied","Data":"f2ad7ce2d49d319f082c87af3d851731ac793ece8cb6aa20f3e2058555e5d1f6"} Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.715412 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-584bf79d4f-9jdkj" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.722350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerStarted","Data":"44348f00d6e93bda70de7e54fc92788369d8ce58da00288fb2bef8661b00088d"} Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.722519 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="ceilometer-central-agent" containerID="cri-o://b1bc9c74ca2608bfd1db4fd2c3f7fc871f3c551969b6a729b285402be28327cf" gracePeriod=30 Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.722770 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.722810 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="proxy-httpd" containerID="cri-o://44348f00d6e93bda70de7e54fc92788369d8ce58da00288fb2bef8661b00088d" gracePeriod=30 Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.722858 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="sg-core" containerID="cri-o://1a22b4a8b45d7f750953ad80953b3429e0443f62ddbf2641877e993b32e13ddf" gracePeriod=30 Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.722888 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="ceilometer-notification-agent" containerID="cri-o://8bbe36b40552e7778425e5f7b91447fc7d75db09f335d02b3f49e321a57dade6" gracePeriod=30 Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.748862 4792 scope.go:117] "RemoveContainer" containerID="09ab4f0b2ef6bfeab28d638822a04703e9f277e60d51c0e54210d2202f80348e" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.754606 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.093334897 podStartE2EDuration="6.754585176s" podCreationTimestamp="2025-11-27 17:33:05 +0000 UTC" firstStartedPulling="2025-11-27 17:33:06.824943559 +0000 UTC m=+1409.167769877" lastFinishedPulling="2025-11-27 17:33:11.486193838 +0000 UTC m=+1413.829020156" observedRunningTime="2025-11-27 17:33:11.751726715 +0000 UTC m=+1414.094553033" watchObservedRunningTime="2025-11-27 17:33:11.754585176 +0000 UTC m=+1414.097411494" Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.794382 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-cdc8758cd-7x94h"] Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.819839 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-cdc8758cd-7x94h"] Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.831106 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-584bf79d4f-9jdkj"] Nov 27 17:33:11 crc kubenswrapper[4792]: I1127 17:33:11.850497 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-584bf79d4f-9jdkj"] Nov 27 17:33:12 crc kubenswrapper[4792]: I1127 17:33:12.698394 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b34285-7729-45d3-981e-8a0c47edb784" 
path="/var/lib/kubelet/pods/25b34285-7729-45d3-981e-8a0c47edb784/volumes" Nov 27 17:33:12 crc kubenswrapper[4792]: I1127 17:33:12.699220 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" path="/var/lib/kubelet/pods/bd0d0ef0-a6ab-40d7-9585-782096604d70/volumes" Nov 27 17:33:12 crc kubenswrapper[4792]: I1127 17:33:12.743958 4792 generic.go:334] "Generic (PLEG): container finished" podID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerID="1a22b4a8b45d7f750953ad80953b3429e0443f62ddbf2641877e993b32e13ddf" exitCode=2 Nov 27 17:33:12 crc kubenswrapper[4792]: I1127 17:33:12.743994 4792 generic.go:334] "Generic (PLEG): container finished" podID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerID="8bbe36b40552e7778425e5f7b91447fc7d75db09f335d02b3f49e321a57dade6" exitCode=0 Nov 27 17:33:12 crc kubenswrapper[4792]: I1127 17:33:12.744001 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerDied","Data":"1a22b4a8b45d7f750953ad80953b3429e0443f62ddbf2641877e993b32e13ddf"} Nov 27 17:33:12 crc kubenswrapper[4792]: I1127 17:33:12.744055 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerDied","Data":"8bbe36b40552e7778425e5f7b91447fc7d75db09f335d02b3f49e321a57dade6"} Nov 27 17:33:13 crc kubenswrapper[4792]: E1127 17:33:13.856764 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 27 17:33:13 crc kubenswrapper[4792]: E1127 17:33:13.861040 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 27 17:33:13 crc kubenswrapper[4792]: E1127 17:33:13.862365 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 27 17:33:13 crc kubenswrapper[4792]: E1127 17:33:13.862429 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-85bccd9657-54g5w" podUID="48dc6ea7-886f-4f25-a954-635e730e6b81" containerName="heat-engine" Nov 27 17:33:14 crc kubenswrapper[4792]: I1127 17:33:14.769061 4792 generic.go:334] "Generic (PLEG): container finished" podID="6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" containerID="d150a6bbbbf4de430746d37d3faeabd0a9dd0c6e7745b9a8e5786f1e0b786204" exitCode=0 Nov 27 17:33:14 crc kubenswrapper[4792]: I1127 17:33:14.769124 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" event={"ID":"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9","Type":"ContainerDied","Data":"d150a6bbbbf4de430746d37d3faeabd0a9dd0c6e7745b9a8e5786f1e0b786204"} Nov 27 
17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.405827 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.603124 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkrzw\" (UniqueName: \"kubernetes.io/projected/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-kube-api-access-mkrzw\") pod \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.603373 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-scripts\") pod \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.603424 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-config-data\") pod \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.603494 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-combined-ca-bundle\") pod \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\" (UID: \"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9\") " Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.611360 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-scripts" (OuterVolumeSpecName: "scripts") pod "6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" (UID: "6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.611677 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-kube-api-access-mkrzw" (OuterVolumeSpecName: "kube-api-access-mkrzw") pod "6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" (UID: "6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9"). InnerVolumeSpecName "kube-api-access-mkrzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.648088 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-config-data" (OuterVolumeSpecName: "config-data") pod "6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" (UID: "6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.655219 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" (UID: "6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.706134 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.706164 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkrzw\" (UniqueName: \"kubernetes.io/projected/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-kube-api-access-mkrzw\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.706181 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.706192 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.792155 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" event={"ID":"6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9","Type":"ContainerDied","Data":"dd2f0002a8ad409fb3049207d7990896c31121ba2045d744ac99ffce037330a5"} Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.792200 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd2f0002a8ad409fb3049207d7990896c31121ba2045d744ac99ffce037330a5" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.792208 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jxhlm" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.902209 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 27 17:33:16 crc kubenswrapper[4792]: E1127 17:33:16.902769 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" containerName="heat-api" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.902786 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" containerName="heat-api" Nov 27 17:33:16 crc kubenswrapper[4792]: E1127 17:33:16.902799 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" containerName="nova-cell0-conductor-db-sync" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.902805 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" containerName="nova-cell0-conductor-db-sync" Nov 27 17:33:16 crc kubenswrapper[4792]: E1127 17:33:16.902828 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b34285-7729-45d3-981e-8a0c47edb784" containerName="heat-cfnapi" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.902836 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b34285-7729-45d3-981e-8a0c47edb784" containerName="heat-cfnapi" Nov 27 17:33:16 crc kubenswrapper[4792]: E1127 17:33:16.902846 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" containerName="heat-api" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.902851 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" containerName="heat-api" Nov 27 17:33:16 crc kubenswrapper[4792]: E1127 17:33:16.902861 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b34285-7729-45d3-981e-8a0c47edb784" containerName="heat-cfnapi" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.902866 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b34285-7729-45d3-981e-8a0c47edb784" containerName="heat-cfnapi" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.903074 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b34285-7729-45d3-981e-8a0c47edb784" containerName="heat-cfnapi" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.903088 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b34285-7729-45d3-981e-8a0c47edb784" containerName="heat-cfnapi" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.903103 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" containerName="heat-api" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.903118 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" containerName="nova-cell0-conductor-db-sync" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.903132 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd0d0ef0-a6ab-40d7-9585-782096604d70" containerName="heat-api" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.903915 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.905626 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-n6z4s" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.906338 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 27 17:33:16 crc kubenswrapper[4792]: I1127 17:33:16.916928 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.011244 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhhbh\" (UniqueName: \"kubernetes.io/projected/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-kube-api-access-mhhbh\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.011346 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.011503 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.113208 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.113344 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhhbh\" (UniqueName: \"kubernetes.io/projected/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-kube-api-access-mhhbh\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.113437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.117157 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.119123 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.151213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhhbh\" (UniqueName: \"kubernetes.io/projected/e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9-kube-api-access-mhhbh\") pod \"nova-cell0-conductor-0\" (UID: \"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9\") " pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:17 crc kubenswrapper[4792]: I1127 17:33:17.230786 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:33:18 crc kubenswrapper[4792]: I1127 17:33:17.692131 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 27 17:33:18 crc kubenswrapper[4792]: W1127 17:33:17.693010 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode07ae62e_2f3c_4122_aba4_fbe7aaf16ff9.slice/crio-f132b22c8ce2fc41ab695238741473e2cb832336dd4d3e1e9d831dfe27bffdae WatchSource:0}: Error finding container f132b22c8ce2fc41ab695238741473e2cb832336dd4d3e1e9d831dfe27bffdae: Status 404 returned error can't find the container with id f132b22c8ce2fc41ab695238741473e2cb832336dd4d3e1e9d831dfe27bffdae
Nov 27 17:33:18 crc kubenswrapper[4792]: I1127 17:33:17.802498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9","Type":"ContainerStarted","Data":"f132b22c8ce2fc41ab695238741473e2cb832336dd4d3e1e9d831dfe27bffdae"}
Nov 27 17:33:18 crc kubenswrapper[4792]: I1127 17:33:18.821796 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9","Type":"ContainerStarted","Data":"eaec2baa7e07678a86c83f8981ab57b031a6435b377e7aad3cdbd9fbb7b49ae3"}
Nov 27 17:33:18 crc kubenswrapper[4792]: I1127 17:33:18.822368 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Nov 27 17:33:18 crc kubenswrapper[4792]: I1127 17:33:18.854372 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.854343883 podStartE2EDuration="2.854343883s" podCreationTimestamp="2025-11-27 17:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:18.848500698 +0000 UTC m=+1421.191327016" watchObservedRunningTime="2025-11-27 17:33:18.854343883 +0000 UTC m=+1421.197170201"
Nov 27 17:33:23 crc kubenswrapper[4792]: E1127 17:33:23.856834 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 27 17:33:23 crc kubenswrapper[4792]: E1127 17:33:23.859842 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 27 17:33:23 crc kubenswrapper[4792]: E1127 17:33:23.865979 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 27 17:33:23 crc kubenswrapper[4792]: E1127 17:33:23.866167 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-85bccd9657-54g5w" podUID="48dc6ea7-886f-4f25-a954-635e730e6b81" containerName="heat-engine"
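The heat-engine readiness probe is an exec probe running /usr/bin/pgrep -r DRST heat-engine inside the container (the cmd field in the errors above). Because the container is already stopping, CRI-O refuses to register a new exec PID, so kubelet reports a probe error (exit code -1) rather than an ordinary probe failure. The sketch below approximates what the probed command itself does, run against the local host instead of via CRI ExecSync; the state-letter reading (D, R, S, T) is my interpretation of the -r/--runstates flag, not confirmed by the log:

// Illustrative sketch only: the readiness command from the entries above.
// pgrep exits 0 when at least one process matches, 1 when none do.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/usr/bin/pgrep", "-r", "DRST", "heat-engine")
	err := cmd.Run()
	switch {
	case err == nil:
		fmt.Println("ready: heat-engine process found")
	case cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 1:
		fmt.Println("not ready: no heat-engine process")
	default:
		// Mirrors the log: the exec could not run at all, which kubelet
		// surfaces as "Probe errored" rather than a failed probe result.
		fmt.Println("probe error:", err)
	}
}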
pod="openstack/heat-engine-85bccd9657-54g5w" podUID="48dc6ea7-886f-4f25-a954-635e730e6b81" containerName="heat-engine" Nov 27 17:33:23 crc kubenswrapper[4792]: I1127 17:33:23.879530 4792 generic.go:334] "Generic (PLEG): container finished" podID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerID="b1bc9c74ca2608bfd1db4fd2c3f7fc871f3c551969b6a729b285402be28327cf" exitCode=0 Nov 27 17:33:23 crc kubenswrapper[4792]: I1127 17:33:23.879593 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerDied","Data":"b1bc9c74ca2608bfd1db4fd2c3f7fc871f3c551969b6a729b285402be28327cf"} Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.776429 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.881730 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data\") pod \"48dc6ea7-886f-4f25-a954-635e730e6b81\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.881783 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-combined-ca-bundle\") pod \"48dc6ea7-886f-4f25-a954-635e730e6b81\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.881835 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz64p\" (UniqueName: \"kubernetes.io/projected/48dc6ea7-886f-4f25-a954-635e730e6b81-kube-api-access-zz64p\") pod \"48dc6ea7-886f-4f25-a954-635e730e6b81\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.881963 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data-custom\") pod \"48dc6ea7-886f-4f25-a954-635e730e6b81\" (UID: \"48dc6ea7-886f-4f25-a954-635e730e6b81\") " Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.887322 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48dc6ea7-886f-4f25-a954-635e730e6b81" (UID: "48dc6ea7-886f-4f25-a954-635e730e6b81"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.888107 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48dc6ea7-886f-4f25-a954-635e730e6b81-kube-api-access-zz64p" (OuterVolumeSpecName: "kube-api-access-zz64p") pod "48dc6ea7-886f-4f25-a954-635e730e6b81" (UID: "48dc6ea7-886f-4f25-a954-635e730e6b81"). InnerVolumeSpecName "kube-api-access-zz64p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.918085 4792 generic.go:334] "Generic (PLEG): container finished" podID="48dc6ea7-886f-4f25-a954-635e730e6b81" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" exitCode=0 Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.918149 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85bccd9657-54g5w" event={"ID":"48dc6ea7-886f-4f25-a954-635e730e6b81","Type":"ContainerDied","Data":"397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3"} Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.918197 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-85bccd9657-54g5w" event={"ID":"48dc6ea7-886f-4f25-a954-635e730e6b81","Type":"ContainerDied","Data":"d0df677956216d53d9618d59e142196e6d5b76e0ab3f40c9fe3a76f33b76c924"} Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.918333 4792 scope.go:117] "RemoveContainer" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.918516 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-85bccd9657-54g5w" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.927775 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48dc6ea7-886f-4f25-a954-635e730e6b81" (UID: "48dc6ea7-886f-4f25-a954-635e730e6b81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.954982 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data" (OuterVolumeSpecName: "config-data") pod "48dc6ea7-886f-4f25-a954-635e730e6b81" (UID: "48dc6ea7-886f-4f25-a954-635e730e6b81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.985835 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.986007 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.986062 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz64p\" (UniqueName: \"kubernetes.io/projected/48dc6ea7-886f-4f25-a954-635e730e6b81-kube-api-access-zz64p\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:26 crc kubenswrapper[4792]: I1127 17:33:26.986136 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48dc6ea7-886f-4f25-a954-635e730e6b81-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.006483 4792 scope.go:117] "RemoveContainer" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" Nov 27 17:33:27 crc kubenswrapper[4792]: E1127 17:33:27.007229 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3\": container with ID starting with 397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3 not found: ID does not exist" containerID="397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.007301 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3"} err="failed to get container status \"397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3\": rpc error: code = NotFound desc = could not find container \"397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3\": container with ID starting with 397ea02db49df70984e1e74ff6c5d79aa982fe4801a051fdc8613267445211b3 not found: ID does not exist" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.260699 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-85bccd9657-54g5w"] Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.271045 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-85bccd9657-54g5w"] Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.271264 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.870564 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-h9ssd"] Nov 27 17:33:27 crc kubenswrapper[4792]: E1127 17:33:27.871137 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48dc6ea7-886f-4f25-a954-635e730e6b81" containerName="heat-engine" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.871157 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="48dc6ea7-886f-4f25-a954-635e730e6b81" containerName="heat-engine" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.871486 4792 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="48dc6ea7-886f-4f25-a954-635e730e6b81" containerName="heat-engine" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.872445 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.877282 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.877541 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 27 17:33:27 crc kubenswrapper[4792]: I1127 17:33:27.890511 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h9ssd"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.008174 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.008520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-scripts\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.008572 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-config-data\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.008614 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ckjx\" (UniqueName: \"kubernetes.io/projected/8333cb7e-8739-4af6-a1eb-775aa791fb82-kube-api-access-7ckjx\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.046950 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.048571 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.056223 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.058066 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.113896 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.113944 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-scripts\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.113986 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-config-data\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.114028 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ckjx\" (UniqueName: \"kubernetes.io/projected/8333cb7e-8739-4af6-a1eb-775aa791fb82-kube-api-access-7ckjx\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.124629 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.125161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-scripts\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.143134 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ckjx\" (UniqueName: \"kubernetes.io/projected/8333cb7e-8739-4af6-a1eb-775aa791fb82-kube-api-access-7ckjx\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.163906 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-config-data\") pod \"nova-cell0-cell-mapping-h9ssd\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.177013 4792 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.178953 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.203725 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.204393 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.217260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-config-data\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.217373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.217433 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqvpw\" (UniqueName: \"kubernetes.io/projected/c22b6e64-ba33-4da4-8c30-e202bace8c2e-kube-api-access-jqvpw\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.233459 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.235402 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.241999 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.264449 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.319320 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-config-data\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.319416 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e40a19-798c-4f99-9e2b-ad5232de38a8-logs\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.319453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.319495 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-config-data\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.319562 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/70e40a19-798c-4f99-9e2b-ad5232de38a8-kube-api-access-2tjgn\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.319602 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.319665 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqvpw\" (UniqueName: \"kubernetes.io/projected/c22b6e64-ba33-4da4-8c30-e202bace8c2e-kube-api-access-jqvpw\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.325156 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.336656 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-config-data\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.353717 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 
27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.382601 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqvpw\" (UniqueName: \"kubernetes.io/projected/c22b6e64-ba33-4da4-8c30-e202bace8c2e-kube-api-access-jqvpw\") pod \"nova-scheduler-0\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.383690 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.394365 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.396397 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.422426 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.423809 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e40a19-798c-4f99-9e2b-ad5232de38a8-logs\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.423948 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.424046 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df22a083-92d1-4897-967b-1f2f27c8b0e8-logs\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.424119 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.424207 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/70e40a19-798c-4f99-9e2b-ad5232de38a8-kube-api-access-2tjgn\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.424309 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-config-data\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.424421 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmkrs\" (UniqueName: \"kubernetes.io/projected/df22a083-92d1-4897-967b-1f2f27c8b0e8-kube-api-access-pmkrs\") pod \"nova-api-0\" (UID: 
\"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.424497 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-config-data\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.429239 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e40a19-798c-4f99-9e2b-ad5232de38a8-logs\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.433270 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-config-data\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.443352 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.484307 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.490832 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/70e40a19-798c-4f99-9e2b-ad5232de38a8-kube-api-access-2tjgn\") pod \"nova-metadata-0\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.513698 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-xbg6r"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.515893 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.530716 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.530847 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-config-data\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.530936 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmkrs\" (UniqueName: \"kubernetes.io/projected/df22a083-92d1-4897-967b-1f2f27c8b0e8-kube-api-access-pmkrs\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.531231 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtvf\" (UniqueName: \"kubernetes.io/projected/52234aa9-87fd-45a8-9c3c-914366e1bbbd-kube-api-access-2mtvf\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.531383 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.531568 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df22a083-92d1-4897-967b-1f2f27c8b0e8-logs\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.531652 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.533410 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df22a083-92d1-4897-967b-1f2f27c8b0e8-logs\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.541961 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-xbg6r"] Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.542535 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc 
kubenswrapper[4792]: I1127 17:33:28.546807 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-config-data\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.554840 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmkrs\" (UniqueName: \"kubernetes.io/projected/df22a083-92d1-4897-967b-1f2f27c8b0e8-kube-api-access-pmkrs\") pod \"nova-api-0\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") " pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.569039 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.593573 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637440 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-svc\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637513 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637536 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637622 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-config\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637655 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637696 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637728 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-xszmb\" (UniqueName: \"kubernetes.io/projected/a181cdfc-ad1c-438d-945d-8e77cabce7c9-kube-api-access-xszmb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637784 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.637833 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mtvf\" (UniqueName: \"kubernetes.io/projected/52234aa9-87fd-45a8-9c3c-914366e1bbbd-kube-api-access-2mtvf\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.647130 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.661361 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mtvf\" (UniqueName: \"kubernetes.io/projected/52234aa9-87fd-45a8-9c3c-914366e1bbbd-kube-api-access-2mtvf\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.677003 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.711220 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48dc6ea7-886f-4f25-a954-635e730e6b81" path="/var/lib/kubelet/pods/48dc6ea7-886f-4f25-a954-635e730e6b81/volumes" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.739957 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-svc\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.740249 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.740395 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-config\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " 
pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.740470 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.740555 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.740627 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xszmb\" (UniqueName: \"kubernetes.io/projected/a181cdfc-ad1c-438d-945d-8e77cabce7c9-kube-api-access-xszmb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.741766 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-svc\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.741801 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.742465 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-config\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.743043 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.744333 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.759277 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xszmb\" (UniqueName: \"kubernetes.io/projected/a181cdfc-ad1c-438d-945d-8e77cabce7c9-kube-api-access-xszmb\") pod \"dnsmasq-dns-7877d89589-xbg6r\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 
Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.900249 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 27 17:33:28 crc kubenswrapper[4792]: I1127 17:33:28.919923 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-xbg6r"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.124448 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h9ssd"]
Nov 27 17:33:29 crc kubenswrapper[4792]: W1127 17:33:29.150833 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8333cb7e_8739_4af6_a1eb_775aa791fb82.slice/crio-c03126196e06df1d7c5d26a866ae4df2c4f87b9b2486f758c301a9e500787175 WatchSource:0}: Error finding container c03126196e06df1d7c5d26a866ae4df2c4f87b9b2486f758c301a9e500787175: Status 404 returned error can't find the container with id c03126196e06df1d7c5d26a866ae4df2c4f87b9b2486f758c301a9e500787175
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.311077 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q6wwh"]
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.315223 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.318320 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.318336 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.335574 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q6wwh"]
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.348821 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 27 17:33:29 crc kubenswrapper[4792]: W1127 17:33:29.360763 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc22b6e64_ba33_4da4_8c30_e202bace8c2e.slice/crio-bc835934e007ae59f16bea473018f720d188414c7594505bc184db9d66061d59 WatchSource:0}: Error finding container bc835934e007ae59f16bea473018f720d188414c7594505bc184db9d66061d59: Status 404 returned error can't find the container with id bc835934e007ae59f16bea473018f720d188414c7594505bc184db9d66061d59
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.485359 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-scripts\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.485735 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.485840 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlcjt\" (UniqueName: \"kubernetes.io/projected/19f9765b-6579-4017-9ee9-dcf8f7829b19-kube-api-access-wlcjt\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.485936 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-config-data\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.500099 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.604380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.607323 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlcjt\" (UniqueName: \"kubernetes.io/projected/19f9765b-6579-4017-9ee9-dcf8f7829b19-kube-api-access-wlcjt\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.607513 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-config-data\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.607756 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-scripts\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.611353 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.626066 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-scripts\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.635037 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-config-data\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.639581 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlcjt\" (UniqueName: \"kubernetes.io/projected/19f9765b-6579-4017-9ee9-dcf8f7829b19-kube-api-access-wlcjt\") pod \"nova-cell1-conductor-db-sync-q6wwh\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.646269 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.666563 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-xbg6r"]
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.723340 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.852362 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q6wwh"
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.968871 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h9ssd" event={"ID":"8333cb7e-8739-4af6-a1eb-775aa791fb82","Type":"ContainerStarted","Data":"87fe28e874f1e0d0ef67214423c214c2891dbf9306f72c0d2034c84ddd18d717"}
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.968915 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h9ssd" event={"ID":"8333cb7e-8739-4af6-a1eb-775aa791fb82","Type":"ContainerStarted","Data":"c03126196e06df1d7c5d26a866ae4df2c4f87b9b2486f758c301a9e500787175"}
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.977904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52234aa9-87fd-45a8-9c3c-914366e1bbbd","Type":"ContainerStarted","Data":"6aef16297ce737efa19b4234a7fa2129a5805fc683362ab51a65eb150ea43987"}
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.979334 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df22a083-92d1-4897-967b-1f2f27c8b0e8","Type":"ContainerStarted","Data":"588f9a2e535e39062c24e181abbf37b146e477161a951c27aa862b43b402a940"}
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.980155 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70e40a19-798c-4f99-9e2b-ad5232de38a8","Type":"ContainerStarted","Data":"4418f43b6c26a6c225887d4a3728b066edfacb56d1f718642c4b7f0cc0a48a6e"}
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.980952 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c22b6e64-ba33-4da4-8c30-e202bace8c2e","Type":"ContainerStarted","Data":"bc835934e007ae59f16bea473018f720d188414c7594505bc184db9d66061d59"}
Nov 27 17:33:29 crc kubenswrapper[4792]: I1127 17:33:29.991733 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" event={"ID":"a181cdfc-ad1c-438d-945d-8e77cabce7c9","Type":"ContainerStarted","Data":"2ed1630c9ec68ea7e4ecd2a8d4d375d686859b4f6e7ef74d4a0333945f558355"}
Nov 27 17:33:30 crc kubenswrapper[4792]: I1127 17:33:30.012987 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-h9ssd" podStartSLOduration=3.012964714 podStartE2EDuration="3.012964714s" podCreationTimestamp="2025-11-27 17:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:29.989383997 +0000 UTC m=+1432.332210315" watchObservedRunningTime="2025-11-27 17:33:30.012964714 +0000 UTC m=+1432.355791032"
podStartSLOduration=3.012964714 podStartE2EDuration="3.012964714s" podCreationTimestamp="2025-11-27 17:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:29.989383997 +0000 UTC m=+1432.332210315" watchObservedRunningTime="2025-11-27 17:33:30.012964714 +0000 UTC m=+1432.355791032" Nov 27 17:33:30 crc kubenswrapper[4792]: I1127 17:33:30.458049 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q6wwh"] Nov 27 17:33:30 crc kubenswrapper[4792]: W1127 17:33:30.477146 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19f9765b_6579_4017_9ee9_dcf8f7829b19.slice/crio-a88df50b0d0ff9562de732482d47056a69aceba9169381f20800c9ec87eb2194 WatchSource:0}: Error finding container a88df50b0d0ff9562de732482d47056a69aceba9169381f20800c9ec87eb2194: Status 404 returned error can't find the container with id a88df50b0d0ff9562de732482d47056a69aceba9169381f20800c9ec87eb2194 Nov 27 17:33:31 crc kubenswrapper[4792]: I1127 17:33:31.016415 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q6wwh" event={"ID":"19f9765b-6579-4017-9ee9-dcf8f7829b19","Type":"ContainerStarted","Data":"9a2c9775a0039dab8d6364a5505dabdbda47779a7e42cb3903d19cb5a59a4dd8"} Nov 27 17:33:31 crc kubenswrapper[4792]: I1127 17:33:31.016820 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q6wwh" event={"ID":"19f9765b-6579-4017-9ee9-dcf8f7829b19","Type":"ContainerStarted","Data":"a88df50b0d0ff9562de732482d47056a69aceba9169381f20800c9ec87eb2194"} Nov 27 17:33:31 crc kubenswrapper[4792]: I1127 17:33:31.037927 4792 generic.go:334] "Generic (PLEG): container finished" podID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerID="7fdf7178243f86bf11f6afea635306d501a1a1e5fdc5bd82c670770043c9b8b9" exitCode=0 Nov 27 17:33:31 crc kubenswrapper[4792]: I1127 17:33:31.038232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" event={"ID":"a181cdfc-ad1c-438d-945d-8e77cabce7c9","Type":"ContainerDied","Data":"7fdf7178243f86bf11f6afea635306d501a1a1e5fdc5bd82c670770043c9b8b9"} Nov 27 17:33:31 crc kubenswrapper[4792]: I1127 17:33:31.047980 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-q6wwh" podStartSLOduration=2.047954087 podStartE2EDuration="2.047954087s" podCreationTimestamp="2025-11-27 17:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:31.038271166 +0000 UTC m=+1433.381097494" watchObservedRunningTime="2025-11-27 17:33:31.047954087 +0000 UTC m=+1433.390780405" Nov 27 17:33:31 crc kubenswrapper[4792]: I1127 17:33:31.885914 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:31 crc kubenswrapper[4792]: I1127 17:33:31.913165 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.093425 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" event={"ID":"a181cdfc-ad1c-438d-945d-8e77cabce7c9","Type":"ContainerStarted","Data":"354f568427650e7e6fe67658b867bd2110132c04b5a8bf57dca5c92c7c3d84c6"} Nov 27 17:33:35 crc kubenswrapper[4792]: 
I1127 17:33:35.093966 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.095461 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c22b6e64-ba33-4da4-8c30-e202bace8c2e","Type":"ContainerStarted","Data":"645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85"} Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.097120 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52234aa9-87fd-45a8-9c3c-914366e1bbbd","Type":"ContainerStarted","Data":"aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148"} Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.097221 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="52234aa9-87fd-45a8-9c3c-914366e1bbbd" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148" gracePeriod=30 Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.102019 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df22a083-92d1-4897-967b-1f2f27c8b0e8","Type":"ContainerStarted","Data":"da2bc00597d78bb98dd9ef8c1c198e0b33f47617ee826ea169e00f1863a39793"} Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.102067 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df22a083-92d1-4897-967b-1f2f27c8b0e8","Type":"ContainerStarted","Data":"e7071f17acdbddd1c4f9602427bb391a5e2595bd0e62bf4c0b085dba683a7d18"} Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.105657 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70e40a19-798c-4f99-9e2b-ad5232de38a8","Type":"ContainerStarted","Data":"f9e98671857926ea392c8669851375cca0b55e83c92ead28ce37eb321446e09d"} Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.105699 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70e40a19-798c-4f99-9e2b-ad5232de38a8","Type":"ContainerStarted","Data":"b0672e4f21f1b95ba1878b2cc46c64cc2232db8ebeb9ec147c3213ae32fb9d12"} Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.105774 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerName="nova-metadata-log" containerID="cri-o://b0672e4f21f1b95ba1878b2cc46c64cc2232db8ebeb9ec147c3213ae32fb9d12" gracePeriod=30 Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.105788 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerName="nova-metadata-metadata" containerID="cri-o://f9e98671857926ea392c8669851375cca0b55e83c92ead28ce37eb321446e09d" gracePeriod=30 Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.121060 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" podStartSLOduration=7.121036971 podStartE2EDuration="7.121036971s" podCreationTimestamp="2025-11-27 17:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:35.112088279 +0000 UTC m=+1437.454914607" watchObservedRunningTime="2025-11-27 
17:33:35.121036971 +0000 UTC m=+1437.463863299" Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.138032 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.124869872 podStartE2EDuration="7.137637895s" podCreationTimestamp="2025-11-27 17:33:28 +0000 UTC" firstStartedPulling="2025-11-27 17:33:29.691196985 +0000 UTC m=+1432.034023303" lastFinishedPulling="2025-11-27 17:33:33.703965008 +0000 UTC m=+1436.046791326" observedRunningTime="2025-11-27 17:33:35.132865146 +0000 UTC m=+1437.475691464" watchObservedRunningTime="2025-11-27 17:33:35.137637895 +0000 UTC m=+1437.480464213" Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.161850 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.966405087 podStartE2EDuration="7.161828127s" podCreationTimestamp="2025-11-27 17:33:28 +0000 UTC" firstStartedPulling="2025-11-27 17:33:29.520513426 +0000 UTC m=+1431.863339744" lastFinishedPulling="2025-11-27 17:33:33.715936466 +0000 UTC m=+1436.058762784" observedRunningTime="2025-11-27 17:33:35.148368142 +0000 UTC m=+1437.491194460" watchObservedRunningTime="2025-11-27 17:33:35.161828127 +0000 UTC m=+1437.504654445" Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.174247 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.835211911 podStartE2EDuration="7.174232825s" podCreationTimestamp="2025-11-27 17:33:28 +0000 UTC" firstStartedPulling="2025-11-27 17:33:29.364558835 +0000 UTC m=+1431.707385153" lastFinishedPulling="2025-11-27 17:33:33.703579749 +0000 UTC m=+1436.046406067" observedRunningTime="2025-11-27 17:33:35.166538464 +0000 UTC m=+1437.509364782" watchObservedRunningTime="2025-11-27 17:33:35.174232825 +0000 UTC m=+1437.517059143" Nov 27 17:33:35 crc kubenswrapper[4792]: I1127 17:33:35.191754 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.143050384 podStartE2EDuration="7.191732991s" podCreationTimestamp="2025-11-27 17:33:28 +0000 UTC" firstStartedPulling="2025-11-27 17:33:29.654912212 +0000 UTC m=+1431.997738530" lastFinishedPulling="2025-11-27 17:33:33.703594819 +0000 UTC m=+1436.046421137" observedRunningTime="2025-11-27 17:33:35.184576213 +0000 UTC m=+1437.527402531" watchObservedRunningTime="2025-11-27 17:33:35.191732991 +0000 UTC m=+1437.534559299" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.124626 4792 generic.go:334] "Generic (PLEG): container finished" podID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerID="f9e98671857926ea392c8669851375cca0b55e83c92ead28ce37eb321446e09d" exitCode=0 Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.125234 4792 generic.go:334] "Generic (PLEG): container finished" podID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerID="b0672e4f21f1b95ba1878b2cc46c64cc2232db8ebeb9ec147c3213ae32fb9d12" exitCode=143 Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.125073 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70e40a19-798c-4f99-9e2b-ad5232de38a8","Type":"ContainerDied","Data":"f9e98671857926ea392c8669851375cca0b55e83c92ead28ce37eb321446e09d"} Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.125399 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"70e40a19-798c-4f99-9e2b-ad5232de38a8","Type":"ContainerDied","Data":"b0672e4f21f1b95ba1878b2cc46c64cc2232db8ebeb9ec147c3213ae32fb9d12"} Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.300519 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.349332 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.459996 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-config-data\") pod \"70e40a19-798c-4f99-9e2b-ad5232de38a8\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.460043 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-combined-ca-bundle\") pod \"70e40a19-798c-4f99-9e2b-ad5232de38a8\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.460072 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/70e40a19-798c-4f99-9e2b-ad5232de38a8-kube-api-access-2tjgn\") pod \"70e40a19-798c-4f99-9e2b-ad5232de38a8\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.460848 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e40a19-798c-4f99-9e2b-ad5232de38a8-logs\") pod \"70e40a19-798c-4f99-9e2b-ad5232de38a8\" (UID: \"70e40a19-798c-4f99-9e2b-ad5232de38a8\") " Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.461137 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e40a19-798c-4f99-9e2b-ad5232de38a8-logs" (OuterVolumeSpecName: "logs") pod "70e40a19-798c-4f99-9e2b-ad5232de38a8" (UID: "70e40a19-798c-4f99-9e2b-ad5232de38a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.461933 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e40a19-798c-4f99-9e2b-ad5232de38a8-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.472896 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e40a19-798c-4f99-9e2b-ad5232de38a8-kube-api-access-2tjgn" (OuterVolumeSpecName: "kube-api-access-2tjgn") pod "70e40a19-798c-4f99-9e2b-ad5232de38a8" (UID: "70e40a19-798c-4f99-9e2b-ad5232de38a8"). InnerVolumeSpecName "kube-api-access-2tjgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.495277 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70e40a19-798c-4f99-9e2b-ad5232de38a8" (UID: "70e40a19-798c-4f99-9e2b-ad5232de38a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.497484 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-config-data" (OuterVolumeSpecName: "config-data") pod "70e40a19-798c-4f99-9e2b-ad5232de38a8" (UID: "70e40a19-798c-4f99-9e2b-ad5232de38a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.564506 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.564799 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e40a19-798c-4f99-9e2b-ad5232de38a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:36 crc kubenswrapper[4792]: I1127 17:33:36.564879 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/70e40a19-798c-4f99-9e2b-ad5232de38a8-kube-api-access-2tjgn\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.138604 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70e40a19-798c-4f99-9e2b-ad5232de38a8","Type":"ContainerDied","Data":"4418f43b6c26a6c225887d4a3728b066edfacb56d1f718642c4b7f0cc0a48a6e"} Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.138695 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.139480 4792 scope.go:117] "RemoveContainer" containerID="f9e98671857926ea392c8669851375cca0b55e83c92ead28ce37eb321446e09d" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.173793 4792 scope.go:117] "RemoveContainer" containerID="b0672e4f21f1b95ba1878b2cc46c64cc2232db8ebeb9ec147c3213ae32fb9d12" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.177858 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.218107 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.238765 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:37 crc kubenswrapper[4792]: E1127 17:33:37.239896 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerName="nova-metadata-log" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.239926 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerName="nova-metadata-log" Nov 27 17:33:37 crc kubenswrapper[4792]: E1127 17:33:37.239963 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerName="nova-metadata-metadata" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.239971 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerName="nova-metadata-metadata" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.240732 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerName="nova-metadata-log" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.240947 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" containerName="nova-metadata-metadata" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.243878 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.247718 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.249438 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.258631 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.341231 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgsw2\" (UniqueName: \"kubernetes.io/projected/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-kube-api-access-rgsw2\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.341402 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.341452 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.341533 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-config-data\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.341971 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-logs\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.456812 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.457084 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-config-data\") pod \"nova-metadata-0\" (UID: 
\"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.457282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-logs\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.457418 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgsw2\" (UniqueName: \"kubernetes.io/projected/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-kube-api-access-rgsw2\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.457732 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.458042 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-logs\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.462311 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-config-data\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.464656 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.473275 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.474047 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgsw2\" (UniqueName: \"kubernetes.io/projected/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-kube-api-access-rgsw2\") pod \"nova-metadata-0\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " pod="openstack/nova-metadata-0" Nov 27 17:33:37 crc kubenswrapper[4792]: I1127 17:33:37.569399 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.086204 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:38 crc kubenswrapper[4792]: W1127 17:33:38.087871 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda040166b_68e4_42d0_a2d6_8863c2e4b1e6.slice/crio-7a73e24dd9aebab0a55dadf5baaf74e7ab787a2d8c30bb0c70773ea8c1bfb323 WatchSource:0}: Error finding container 7a73e24dd9aebab0a55dadf5baaf74e7ab787a2d8c30bb0c70773ea8c1bfb323: Status 404 returned error can't find the container with id 7a73e24dd9aebab0a55dadf5baaf74e7ab787a2d8c30bb0c70773ea8c1bfb323 Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.155267 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a040166b-68e4-42d0-a2d6-8863c2e4b1e6","Type":"ContainerStarted","Data":"7a73e24dd9aebab0a55dadf5baaf74e7ab787a2d8c30bb0c70773ea8c1bfb323"} Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.384441 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.384748 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.417874 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.569948 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.570032 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.703184 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e40a19-798c-4f99-9e2b-ad5232de38a8" path="/var/lib/kubelet/pods/70e40a19-798c-4f99-9e2b-ad5232de38a8/volumes" Nov 27 17:33:38 crc kubenswrapper[4792]: I1127 17:33:38.900973 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:33:39 crc kubenswrapper[4792]: I1127 17:33:39.177543 4792 generic.go:334] "Generic (PLEG): container finished" podID="8333cb7e-8739-4af6-a1eb-775aa791fb82" containerID="87fe28e874f1e0d0ef67214423c214c2891dbf9306f72c0d2034c84ddd18d717" exitCode=0 Nov 27 17:33:39 crc kubenswrapper[4792]: I1127 17:33:39.177655 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h9ssd" event={"ID":"8333cb7e-8739-4af6-a1eb-775aa791fb82","Type":"ContainerDied","Data":"87fe28e874f1e0d0ef67214423c214c2891dbf9306f72c0d2034c84ddd18d717"} Nov 27 17:33:39 crc kubenswrapper[4792]: I1127 17:33:39.185711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a040166b-68e4-42d0-a2d6-8863c2e4b1e6","Type":"ContainerStarted","Data":"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0"} Nov 27 17:33:39 crc kubenswrapper[4792]: I1127 17:33:39.185762 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a040166b-68e4-42d0-a2d6-8863c2e4b1e6","Type":"ContainerStarted","Data":"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924"} Nov 27 17:33:39 crc kubenswrapper[4792]: I1127 17:33:39.233047 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.233027994 podStartE2EDuration="2.233027994s" podCreationTimestamp="2025-11-27 17:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:39.227577739 +0000 UTC m=+1441.570404057" watchObservedRunningTime="2025-11-27 17:33:39.233027994 +0000 UTC m=+1441.575854312" Nov 27 17:33:39 crc kubenswrapper[4792]: I1127 17:33:39.249404 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 27 17:33:39 crc kubenswrapper[4792]: I1127 17:33:39.651933 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.233:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 27 17:33:39 crc kubenswrapper[4792]: I1127 17:33:39.652414 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.233:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.199782 4792 generic.go:334] "Generic (PLEG): container finished" podID="19f9765b-6579-4017-9ee9-dcf8f7829b19" containerID="9a2c9775a0039dab8d6364a5505dabdbda47779a7e42cb3903d19cb5a59a4dd8" exitCode=0 Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.199874 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q6wwh" event={"ID":"19f9765b-6579-4017-9ee9-dcf8f7829b19","Type":"ContainerDied","Data":"9a2c9775a0039dab8d6364a5505dabdbda47779a7e42cb3903d19cb5a59a4dd8"} Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.737061 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.833090 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-config-data\") pod \"8333cb7e-8739-4af6-a1eb-775aa791fb82\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.833687 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-combined-ca-bundle\") pod \"8333cb7e-8739-4af6-a1eb-775aa791fb82\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.833881 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ckjx\" (UniqueName: \"kubernetes.io/projected/8333cb7e-8739-4af6-a1eb-775aa791fb82-kube-api-access-7ckjx\") pod \"8333cb7e-8739-4af6-a1eb-775aa791fb82\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.833945 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-scripts\") pod \"8333cb7e-8739-4af6-a1eb-775aa791fb82\" (UID: \"8333cb7e-8739-4af6-a1eb-775aa791fb82\") " Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.854988 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-scripts" (OuterVolumeSpecName: "scripts") pod "8333cb7e-8739-4af6-a1eb-775aa791fb82" (UID: "8333cb7e-8739-4af6-a1eb-775aa791fb82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.856589 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8333cb7e-8739-4af6-a1eb-775aa791fb82-kube-api-access-7ckjx" (OuterVolumeSpecName: "kube-api-access-7ckjx") pod "8333cb7e-8739-4af6-a1eb-775aa791fb82" (UID: "8333cb7e-8739-4af6-a1eb-775aa791fb82"). InnerVolumeSpecName "kube-api-access-7ckjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.871964 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8333cb7e-8739-4af6-a1eb-775aa791fb82" (UID: "8333cb7e-8739-4af6-a1eb-775aa791fb82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.897297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-config-data" (OuterVolumeSpecName: "config-data") pod "8333cb7e-8739-4af6-a1eb-775aa791fb82" (UID: "8333cb7e-8739-4af6-a1eb-775aa791fb82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.936539 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.936576 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.936587 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ckjx\" (UniqueName: \"kubernetes.io/projected/8333cb7e-8739-4af6-a1eb-775aa791fb82-kube-api-access-7ckjx\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:40 crc kubenswrapper[4792]: I1127 17:33:40.936595 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8333cb7e-8739-4af6-a1eb-775aa791fb82-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.212841 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h9ssd" event={"ID":"8333cb7e-8739-4af6-a1eb-775aa791fb82","Type":"ContainerDied","Data":"c03126196e06df1d7c5d26a866ae4df2c4f87b9b2486f758c301a9e500787175"} Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.213948 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c03126196e06df1d7c5d26a866ae4df2c4f87b9b2486f758c301a9e500787175" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.212884 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h9ssd" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.385554 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.386154 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-log" containerID="cri-o://e7071f17acdbddd1c4f9602427bb391a5e2595bd0e62bf4c0b085dba683a7d18" gracePeriod=30 Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.386293 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-api" containerID="cri-o://da2bc00597d78bb98dd9ef8c1c198e0b33f47617ee826ea169e00f1863a39793" gracePeriod=30 Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.406423 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.406632 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c22b6e64-ba33-4da4-8c30-e202bace8c2e" containerName="nova-scheduler-scheduler" containerID="cri-o://645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85" gracePeriod=30 Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.429399 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.429594 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" 
containerName="nova-metadata-log" containerID="cri-o://be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924" gracePeriod=30 Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.430050 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerName="nova-metadata-metadata" containerID="cri-o://c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0" gracePeriod=30 Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.582264 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q6wwh" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.654704 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlcjt\" (UniqueName: \"kubernetes.io/projected/19f9765b-6579-4017-9ee9-dcf8f7829b19-kube-api-access-wlcjt\") pod \"19f9765b-6579-4017-9ee9-dcf8f7829b19\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.654804 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-scripts\") pod \"19f9765b-6579-4017-9ee9-dcf8f7829b19\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.655227 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-config-data\") pod \"19f9765b-6579-4017-9ee9-dcf8f7829b19\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.655281 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-combined-ca-bundle\") pod \"19f9765b-6579-4017-9ee9-dcf8f7829b19\" (UID: \"19f9765b-6579-4017-9ee9-dcf8f7829b19\") " Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.660970 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-scripts" (OuterVolumeSpecName: "scripts") pod "19f9765b-6579-4017-9ee9-dcf8f7829b19" (UID: "19f9765b-6579-4017-9ee9-dcf8f7829b19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.666992 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f9765b-6579-4017-9ee9-dcf8f7829b19-kube-api-access-wlcjt" (OuterVolumeSpecName: "kube-api-access-wlcjt") pod "19f9765b-6579-4017-9ee9-dcf8f7829b19" (UID: "19f9765b-6579-4017-9ee9-dcf8f7829b19"). InnerVolumeSpecName "kube-api-access-wlcjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.715405 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-config-data" (OuterVolumeSpecName: "config-data") pod "19f9765b-6579-4017-9ee9-dcf8f7829b19" (UID: "19f9765b-6579-4017-9ee9-dcf8f7829b19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.735068 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19f9765b-6579-4017-9ee9-dcf8f7829b19" (UID: "19f9765b-6579-4017-9ee9-dcf8f7829b19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.758066 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.758100 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.758113 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlcjt\" (UniqueName: \"kubernetes.io/projected/19f9765b-6579-4017-9ee9-dcf8f7829b19-kube-api-access-wlcjt\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:41 crc kubenswrapper[4792]: I1127 17:33:41.758124 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19f9765b-6579-4017-9ee9-dcf8f7829b19-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.088766 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.170192 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-nova-metadata-tls-certs\") pod \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.171002 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-config-data\") pod \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.171114 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-combined-ca-bundle\") pod \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.171229 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgsw2\" (UniqueName: \"kubernetes.io/projected/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-kube-api-access-rgsw2\") pod \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.171262 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-logs\") pod \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\" (UID: \"a040166b-68e4-42d0-a2d6-8863c2e4b1e6\") " Nov 27 17:33:42 crc 
kubenswrapper[4792]: I1127 17:33:42.172274 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-logs" (OuterVolumeSpecName: "logs") pod "a040166b-68e4-42d0-a2d6-8863c2e4b1e6" (UID: "a040166b-68e4-42d0-a2d6-8863c2e4b1e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.177685 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-kube-api-access-rgsw2" (OuterVolumeSpecName: "kube-api-access-rgsw2") pod "a040166b-68e4-42d0-a2d6-8863c2e4b1e6" (UID: "a040166b-68e4-42d0-a2d6-8863c2e4b1e6"). InnerVolumeSpecName "kube-api-access-rgsw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.222297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a040166b-68e4-42d0-a2d6-8863c2e4b1e6" (UID: "a040166b-68e4-42d0-a2d6-8863c2e4b1e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.233775 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-config-data" (OuterVolumeSpecName: "config-data") pod "a040166b-68e4-42d0-a2d6-8863c2e4b1e6" (UID: "a040166b-68e4-42d0-a2d6-8863c2e4b1e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.244181 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q6wwh" event={"ID":"19f9765b-6579-4017-9ee9-dcf8f7829b19","Type":"ContainerDied","Data":"a88df50b0d0ff9562de732482d47056a69aceba9169381f20800c9ec87eb2194"} Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.244226 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88df50b0d0ff9562de732482d47056a69aceba9169381f20800c9ec87eb2194" Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.244322 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q6wwh" Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.272033 4792 generic.go:334] "Generic (PLEG): container finished" podID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerID="44348f00d6e93bda70de7e54fc92788369d8ce58da00288fb2bef8661b00088d" exitCode=137 Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.273489 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerDied","Data":"44348f00d6e93bda70de7e54fc92788369d8ce58da00288fb2bef8661b00088d"} Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.277053 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a040166b-68e4-42d0-a2d6-8863c2e4b1e6" (UID: "a040166b-68e4-42d0-a2d6-8863c2e4b1e6"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.277146 4792 generic.go:334] "Generic (PLEG): container finished" podID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerID="c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0" exitCode=0
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.277168 4792 generic.go:334] "Generic (PLEG): container finished" podID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerID="be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924" exitCode=143
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.277245 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.277325 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a040166b-68e4-42d0-a2d6-8863c2e4b1e6","Type":"ContainerDied","Data":"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0"}
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.277349 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a040166b-68e4-42d0-a2d6-8863c2e4b1e6","Type":"ContainerDied","Data":"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924"}
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.277358 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a040166b-68e4-42d0-a2d6-8863c2e4b1e6","Type":"ContainerDied","Data":"7a73e24dd9aebab0a55dadf5baaf74e7ab787a2d8c30bb0c70773ea8c1bfb323"}
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.277371 4792 scope.go:117] "RemoveContainer" containerID="c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.278410 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.281332 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgsw2\" (UniqueName: \"kubernetes.io/projected/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-kube-api-access-rgsw2\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.281360 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-logs\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.281372 4792 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.281383 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a040166b-68e4-42d0-a2d6-8863c2e4b1e6-config-data\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.299846 4792 generic.go:334] "Generic (PLEG): container finished" podID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerID="e7071f17acdbddd1c4f9602427bb391a5e2595bd0e62bf4c0b085dba683a7d18" exitCode=143
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.299893 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df22a083-92d1-4897-967b-1f2f27c8b0e8","Type":"ContainerDied","Data":"e7071f17acdbddd1c4f9602427bb391a5e2595bd0e62bf4c0b085dba683a7d18"}
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.321076 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.324603 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.325080 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerName="nova-metadata-metadata"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325097 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerName="nova-metadata-metadata"
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.325109 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8333cb7e-8739-4af6-a1eb-775aa791fb82" containerName="nova-manage"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325115 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8333cb7e-8739-4af6-a1eb-775aa791fb82" containerName="nova-manage"
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.325137 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="ceilometer-central-agent"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325144 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="ceilometer-central-agent"
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.325164 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="proxy-httpd"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325171 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="proxy-httpd"
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.325180 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="sg-core"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325186 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="sg-core"
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.325199 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerName="nova-metadata-log"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325205 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerName="nova-metadata-log"
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.325217 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19f9765b-6579-4017-9ee9-dcf8f7829b19" containerName="nova-cell1-conductor-db-sync"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325223 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f9765b-6579-4017-9ee9-dcf8f7829b19" containerName="nova-cell1-conductor-db-sync"
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.325234 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="ceilometer-notification-agent"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325240 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="ceilometer-notification-agent"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325444 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="19f9765b-6579-4017-9ee9-dcf8f7829b19" containerName="nova-cell1-conductor-db-sync"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325457 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="ceilometer-central-agent"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325474 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerName="nova-metadata-metadata"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325489 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="ceilometer-notification-agent"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325683 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="proxy-httpd"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325701 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" containerName="nova-metadata-log"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325724 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" containerName="sg-core"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.325732 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8333cb7e-8739-4af6-a1eb-775aa791fb82" containerName="nova-manage"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.326489 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.336737 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.344256 4792 scope.go:117] "RemoveContainer" containerID="be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.349223 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.379713 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.382310 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-log-httpd\") pod \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") "
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.382367 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-scripts\") pod \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") "
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.382426 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-combined-ca-bundle\") pod \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") "
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.382453 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-run-httpd\") pod \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") "
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.382503 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkz2d\" (UniqueName: \"kubernetes.io/projected/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-kube-api-access-rkz2d\") pod \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") "
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.382558 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-sg-core-conf-yaml\") pod \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") "
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.382732 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-config-data\") pod \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\" (UID: \"c2bd29c6-0b83-4b93-8c18-e4733154d3d9\") "
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.383047 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6765\" (UniqueName: \"kubernetes.io/projected/3c9b4c85-0700-45e9-b663-ca02ecf5009d-kube-api-access-z6765\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.383223 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9b4c85-0700-45e9-b663-ca02ecf5009d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.383299 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9b4c85-0700-45e9-b663-ca02ecf5009d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.384259 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c2bd29c6-0b83-4b93-8c18-e4733154d3d9" (UID: "c2bd29c6-0b83-4b93-8c18-e4733154d3d9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.387681 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-scripts" (OuterVolumeSpecName: "scripts") pod "c2bd29c6-0b83-4b93-8c18-e4733154d3d9" (UID: "c2bd29c6-0b83-4b93-8c18-e4733154d3d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.391757 4792 scope.go:117] "RemoveContainer" containerID="c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0"
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.393168 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0\": container with ID starting with c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0 not found: ID does not exist" containerID="c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.393223 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0"} err="failed to get container status \"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0\": rpc error: code = NotFound desc = could not find container \"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0\": container with ID starting with c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0 not found: ID does not exist"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.393253 4792 scope.go:117] "RemoveContainer" containerID="be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.393914 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c2bd29c6-0b83-4b93-8c18-e4733154d3d9" (UID: "c2bd29c6-0b83-4b93-8c18-e4733154d3d9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 17:33:42 crc kubenswrapper[4792]: E1127 17:33:42.394376 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924\": container with ID starting with be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924 not found: ID does not exist" containerID="be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.394403 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924"} err="failed to get container status \"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924\": rpc error: code = NotFound desc = could not find container \"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924\": container with ID starting with be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924 not found: ID does not exist"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.394424 4792 scope.go:117] "RemoveContainer" containerID="c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.420928 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0"} err="failed to get container status \"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0\": rpc error: code = NotFound desc = could not find container \"c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0\": container with ID starting with c8cee08992c226970f7474651319c2e82a40436d741fbe10d11f1405183792c0 not found: ID does not exist"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.420977 4792 scope.go:117] "RemoveContainer" containerID="be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.422657 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-kube-api-access-rkz2d" (OuterVolumeSpecName: "kube-api-access-rkz2d") pod "c2bd29c6-0b83-4b93-8c18-e4733154d3d9" (UID: "c2bd29c6-0b83-4b93-8c18-e4733154d3d9"). InnerVolumeSpecName "kube-api-access-rkz2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.422843 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924"} err="failed to get container status \"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924\": rpc error: code = NotFound desc = could not find container \"be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924\": container with ID starting with be6d7f2f6b64aeb531c877cc55bd95eef785c81891aa6f2ba1ff266ad82b2924 not found: ID does not exist"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.431713 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.443790 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c2bd29c6-0b83-4b93-8c18-e4733154d3d9" (UID: "c2bd29c6-0b83-4b93-8c18-e4733154d3d9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.467920 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.469968 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.472593 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.477322 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493212 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493282 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-logs\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493316 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9b4c85-0700-45e9-b663-ca02ecf5009d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493353 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6765\" (UniqueName: \"kubernetes.io/projected/3c9b4c85-0700-45e9-b663-ca02ecf5009d-kube-api-access-z6765\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493425 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-config-data\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493581 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvd8\" (UniqueName: \"kubernetes.io/projected/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-kube-api-access-2dvd8\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493686 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9b4c85-0700-45e9-b663-ca02ecf5009d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493761 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493774 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-scripts\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493786 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493797 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkz2d\" (UniqueName: \"kubernetes.io/projected/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-kube-api-access-rkz2d\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.493808 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.508334 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9b4c85-0700-45e9-b663-ca02ecf5009d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.513310 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9b4c85-0700-45e9-b663-ca02ecf5009d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.533281 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6765\" (UniqueName: \"kubernetes.io/projected/3c9b4c85-0700-45e9-b663-ca02ecf5009d-kube-api-access-z6765\") pod \"nova-cell1-conductor-0\" (UID: \"3c9b4c85-0700-45e9-b663-ca02ecf5009d\") " pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.561341 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.574806 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2bd29c6-0b83-4b93-8c18-e4733154d3d9" (UID: "c2bd29c6-0b83-4b93-8c18-e4733154d3d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.596970 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.597027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dvd8\" (UniqueName: \"kubernetes.io/projected/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-kube-api-access-2dvd8\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.597123 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.597151 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-logs\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.597202 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-config-data\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.597253 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.601320 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-config-data\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.601328 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-logs\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.601468 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.603021 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.640355 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dvd8\" (UniqueName: \"kubernetes.io/projected/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-kube-api-access-2dvd8\") pod \"nova-metadata-0\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.651813 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-config-data" (OuterVolumeSpecName: "config-data") pod "c2bd29c6-0b83-4b93-8c18-e4733154d3d9" (UID: "c2bd29c6-0b83-4b93-8c18-e4733154d3d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.663464 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.701238 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2bd29c6-0b83-4b93-8c18-e4733154d3d9-config-data\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.721037 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a040166b-68e4-42d0-a2d6-8863c2e4b1e6" path="/var/lib/kubelet/pods/a040166b-68e4-42d0-a2d6-8863c2e4b1e6/volumes"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.808964 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.906778 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-n4j6s"]
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.909353 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-n4j6s"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.931116 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-7573-account-create-update-xsrqm"]
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.933228 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7573-account-create-update-xsrqm"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.935692 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.942974 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-n4j6s"]
Nov 27 17:33:42 crc kubenswrapper[4792]: I1127 17:33:42.957817 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7573-account-create-update-xsrqm"]
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.010656 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmkh\" (UniqueName: \"kubernetes.io/projected/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-kube-api-access-zfmkh\") pod \"aodh-7573-account-create-update-xsrqm\" (UID: \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\") " pod="openstack/aodh-7573-account-create-update-xsrqm"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.010755 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4dmr\" (UniqueName: \"kubernetes.io/projected/c4be33e9-5259-40d3-9496-c1836cb67060-kube-api-access-w4dmr\") pod \"aodh-db-create-n4j6s\" (UID: \"c4be33e9-5259-40d3-9496-c1836cb67060\") " pod="openstack/aodh-db-create-n4j6s"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.010792 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4be33e9-5259-40d3-9496-c1836cb67060-operator-scripts\") pod \"aodh-db-create-n4j6s\" (UID: \"c4be33e9-5259-40d3-9496-c1836cb67060\") " pod="openstack/aodh-db-create-n4j6s"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.010936 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-operator-scripts\") pod \"aodh-7573-account-create-update-xsrqm\" (UID: \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\") " pod="openstack/aodh-7573-account-create-update-xsrqm"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.113328 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4be33e9-5259-40d3-9496-c1836cb67060-operator-scripts\") pod \"aodh-db-create-n4j6s\" (UID: \"c4be33e9-5259-40d3-9496-c1836cb67060\") " pod="openstack/aodh-db-create-n4j6s"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.113569 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-operator-scripts\") pod \"aodh-7573-account-create-update-xsrqm\" (UID: \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\") " pod="openstack/aodh-7573-account-create-update-xsrqm"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.113667 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmkh\" (UniqueName: \"kubernetes.io/projected/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-kube-api-access-zfmkh\") pod \"aodh-7573-account-create-update-xsrqm\" (UID: \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\") " pod="openstack/aodh-7573-account-create-update-xsrqm"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.113765 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4dmr\" (UniqueName: \"kubernetes.io/projected/c4be33e9-5259-40d3-9496-c1836cb67060-kube-api-access-w4dmr\") pod \"aodh-db-create-n4j6s\" (UID: \"c4be33e9-5259-40d3-9496-c1836cb67060\") " pod="openstack/aodh-db-create-n4j6s"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.114986 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4be33e9-5259-40d3-9496-c1836cb67060-operator-scripts\") pod \"aodh-db-create-n4j6s\" (UID: \"c4be33e9-5259-40d3-9496-c1836cb67060\") " pod="openstack/aodh-db-create-n4j6s"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.115165 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-operator-scripts\") pod \"aodh-7573-account-create-update-xsrqm\" (UID: \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\") " pod="openstack/aodh-7573-account-create-update-xsrqm"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.135611 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4dmr\" (UniqueName: \"kubernetes.io/projected/c4be33e9-5259-40d3-9496-c1836cb67060-kube-api-access-w4dmr\") pod \"aodh-db-create-n4j6s\" (UID: \"c4be33e9-5259-40d3-9496-c1836cb67060\") " pod="openstack/aodh-db-create-n4j6s"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.135952 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfmkh\" (UniqueName: \"kubernetes.io/projected/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-kube-api-access-zfmkh\") pod \"aodh-7573-account-create-update-xsrqm\" (UID: \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\") " pod="openstack/aodh-7573-account-create-update-xsrqm"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.254271 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-n4j6s"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.278539 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7573-account-create-update-xsrqm"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.286108 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.322401 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.322398 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2bd29c6-0b83-4b93-8c18-e4733154d3d9","Type":"ContainerDied","Data":"bb81203d9e8943a9d6656aeff80a48aef4d4807e00ac5c11c514ff7bb070d064"}
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.322595 4792 scope.go:117] "RemoveContainer" containerID="44348f00d6e93bda70de7e54fc92788369d8ce58da00288fb2bef8661b00088d"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.327592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c9b4c85-0700-45e9-b663-ca02ecf5009d","Type":"ContainerStarted","Data":"57610524cdd5980048758762a9701383be7c7acf1facc44904cbabb7aba145b1"}
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.370422 4792 scope.go:117] "RemoveContainer" containerID="1a22b4a8b45d7f750953ad80953b3429e0443f62ddbf2641877e993b32e13ddf"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.382451 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 27 17:33:43 crc kubenswrapper[4792]: E1127 17:33:43.396686 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.401632 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 27 17:33:43 crc kubenswrapper[4792]: E1127 17:33:43.430743 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 27 17:33:43 crc kubenswrapper[4792]: E1127 17:33:43.450030 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 27 17:33:43 crc kubenswrapper[4792]: E1127 17:33:43.450097 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c22b6e64-ba33-4da4-8c30-e202bace8c2e" containerName="nova-scheduler-scheduler"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.471436 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.481291 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.496610 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.496687 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.500270 4792 scope.go:117] "RemoveContainer" containerID="8bbe36b40552e7778425e5f7b91447fc7d75db09f335d02b3f49e321a57dade6"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.549434 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgn8v\" (UniqueName: \"kubernetes.io/projected/4b321af9-a492-4d9c-a75d-54987369d450-kube-api-access-jgn8v\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.549560 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-run-httpd\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.549619 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-log-httpd\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.549845 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.549903 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-scripts\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.549967 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-config-data\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.550000 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.563405 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.595769 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.598437 4792 scope.go:117] "RemoveContainer" containerID="b1bc9c74ca2608bfd1db4fd2c3f7fc871f3c551969b6a729b285402be28327cf"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.652454 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.652513 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-scripts\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.652552 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-config-data\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.652574 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.652667 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgn8v\" (UniqueName: \"kubernetes.io/projected/4b321af9-a492-4d9c-a75d-54987369d450-kube-api-access-jgn8v\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.652727 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-run-httpd\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.652755 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-log-httpd\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.653330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-log-httpd\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.658893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-run-httpd\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.661274 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.661610 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.662208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-scripts\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.668593 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-config-data\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.688456 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgn8v\" (UniqueName: \"kubernetes.io/projected/4b321af9-a492-4d9c-a75d-54987369d450-kube-api-access-jgn8v\") pod \"ceilometer-0\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.848791 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.870026 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-n4j6s"]
Nov 27 17:33:43 crc kubenswrapper[4792]: W1127 17:33:43.886367 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4be33e9_5259_40d3_9496_c1836cb67060.slice/crio-0f4226ec5d300c9d7aed1ad4559e6a4665118e0c382fa0aada5baa5610dd9601 WatchSource:0}: Error finding container 0f4226ec5d300c9d7aed1ad4559e6a4665118e0c382fa0aada5baa5610dd9601: Status 404 returned error can't find the container with id 0f4226ec5d300c9d7aed1ad4559e6a4665118e0c382fa0aada5baa5610dd9601
Nov 27 17:33:43 crc kubenswrapper[4792]: I1127 17:33:43.922814 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-xbg6r"
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.056793 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lh72w"]
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.057065 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" podUID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" containerName="dnsmasq-dns" containerID="cri-o://f328ec11bbe068159b78754ea6a5913c12a4b2e5a52d5c36fb9c02e2925b7398" gracePeriod=10
Nov 27 17:33:44 crc kubenswrapper[4792]: W1127 17:33:44.067893 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eacffe2_c3ab_4816_9d52_3c73de2d37cf.slice/crio-40dad975c511e475a2675dd080041bc2586f96767d85b003044b853c61ef2304 WatchSource:0}: Error finding container 40dad975c511e475a2675dd080041bc2586f96767d85b003044b853c61ef2304: Status 404 returned error can't find the container with id 40dad975c511e475a2675dd080041bc2586f96767d85b003044b853c61ef2304
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.106013 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7573-account-create-update-xsrqm"]
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.354532 4792 generic.go:334] "Generic (PLEG): container finished" podID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" containerID="f328ec11bbe068159b78754ea6a5913c12a4b2e5a52d5c36fb9c02e2925b7398" exitCode=0
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.354818 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" event={"ID":"9c02eb12-16a2-4c2d-849f-0309fd114fd2","Type":"ContainerDied","Data":"f328ec11bbe068159b78754ea6a5913c12a4b2e5a52d5c36fb9c02e2925b7398"}
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.357365 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-n4j6s" event={"ID":"c4be33e9-5259-40d3-9496-c1836cb67060","Type":"ContainerStarted","Data":"3daed44d41e87d5ace004f013ff7d2af8ad89f1a9a8bb93a827e2aeefadce950"}
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.357409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-n4j6s" event={"ID":"c4be33e9-5259-40d3-9496-c1836cb67060","Type":"ContainerStarted","Data":"0f4226ec5d300c9d7aed1ad4559e6a4665118e0c382fa0aada5baa5610dd9601"}
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.361887 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c9b4c85-0700-45e9-b663-ca02ecf5009d","Type":"ContainerStarted","Data":"691e42b3b21ac3a1f1f646f31620f91adcd839abdbc598807d17b63847ffb2c8"}
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.362636 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.370358 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26","Type":"ContainerStarted","Data":"51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3"}
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.370412 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26","Type":"ContainerStarted","Data":"0d71d7732c9a5ccc99e208c15e52f1c3b2c02762bee249a062fc78bfab038885"}
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.379760 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7573-account-create-update-xsrqm" event={"ID":"0eacffe2-c3ab-4816-9d52-3c73de2d37cf","Type":"ContainerStarted","Data":"40dad975c511e475a2675dd080041bc2586f96767d85b003044b853c61ef2304"}
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.391125 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-n4j6s" podStartSLOduration=2.391100645 podStartE2EDuration="2.391100645s" podCreationTimestamp="2025-11-27 17:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:44.373267371 +0000 UTC m=+1446.716093699" watchObservedRunningTime="2025-11-27 17:33:44.391100645 +0000 UTC m=+1446.733926963"
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.418127 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.418103167 podStartE2EDuration="2.418103167s" podCreationTimestamp="2025-11-27 17:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:44.393885164 +0000 UTC m=+1446.736711482" watchObservedRunningTime="2025-11-27 17:33:44.418103167 +0000 UTC m=+1446.760929485"
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.439345 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.439323555 podStartE2EDuration="2.439323555s" podCreationTimestamp="2025-11-27 17:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:44.413989475 +0000 UTC m=+1446.756815793" watchObservedRunningTime="2025-11-27 17:33:44.439323555 +0000 UTC m=+1446.782149873"
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.448667 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-7573-account-create-update-xsrqm" podStartSLOduration=2.448631997 podStartE2EDuration="2.448631997s" podCreationTimestamp="2025-11-27 17:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:44.426007224 +0000 UTC m=+1446.768833532" watchObservedRunningTime="2025-11-27 17:33:44.448631997 +0000 UTC m=+1446.791458325"
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.543348 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 27 17:33:44 crc kubenswrapper[4792]: I1127 17:33:44.715226 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2bd29c6-0b83-4b93-8c18-e4733154d3d9" path="/var/lib/kubelet/pods/c2bd29c6-0b83-4b93-8c18-e4733154d3d9/volumes"
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.062071 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lh72w"
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.206563 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-svc\") pod \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.206711 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-nb\") pod \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.206781 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-sb\") pod \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.206827 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-config\") pod \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.206865 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzcx6\" (UniqueName: \"kubernetes.io/projected/9c02eb12-16a2-4c2d-849f-0309fd114fd2-kube-api-access-xzcx6\") pod \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.206954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-swift-storage-0\") pod \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\" (UID: \"9c02eb12-16a2-4c2d-849f-0309fd114fd2\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.236872 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c02eb12-16a2-4c2d-849f-0309fd114fd2-kube-api-access-xzcx6" (OuterVolumeSpecName: "kube-api-access-xzcx6") pod "9c02eb12-16a2-4c2d-849f-0309fd114fd2" (UID: "9c02eb12-16a2-4c2d-849f-0309fd114fd2"). InnerVolumeSpecName "kube-api-access-xzcx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.288593 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c02eb12-16a2-4c2d-849f-0309fd114fd2" (UID: "9c02eb12-16a2-4c2d-849f-0309fd114fd2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.294567 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9c02eb12-16a2-4c2d-849f-0309fd114fd2" (UID: "9c02eb12-16a2-4c2d-849f-0309fd114fd2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.304193 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9c02eb12-16a2-4c2d-849f-0309fd114fd2" (UID: "9c02eb12-16a2-4c2d-849f-0309fd114fd2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.309226 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.309255 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.309265 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzcx6\" (UniqueName: \"kubernetes.io/projected/9c02eb12-16a2-4c2d-849f-0309fd114fd2-kube-api-access-xzcx6\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.309275 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.328796 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-config" (OuterVolumeSpecName: "config") pod "9c02eb12-16a2-4c2d-849f-0309fd114fd2" (UID: "9c02eb12-16a2-4c2d-849f-0309fd114fd2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.401056 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-lh72w" event={"ID":"9c02eb12-16a2-4c2d-849f-0309fd114fd2","Type":"ContainerDied","Data":"f568b9f087c4d319eb1f8d2984f59316cd9c94a8cfa7b48d3490e7ba1a6056ff"}
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.401110 4792 scope.go:117] "RemoveContainer" containerID="f328ec11bbe068159b78754ea6a5913c12a4b2e5a52d5c36fb9c02e2925b7398"
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.401244 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-lh72w"
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.406771 4792 generic.go:334] "Generic (PLEG): container finished" podID="c4be33e9-5259-40d3-9496-c1836cb67060" containerID="3daed44d41e87d5ace004f013ff7d2af8ad89f1a9a8bb93a827e2aeefadce950" exitCode=0
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.406823 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-n4j6s" event={"ID":"c4be33e9-5259-40d3-9496-c1836cb67060","Type":"ContainerDied","Data":"3daed44d41e87d5ace004f013ff7d2af8ad89f1a9a8bb93a827e2aeefadce950"}
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.410570 4792 generic.go:334] "Generic (PLEG): container finished" podID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerID="da2bc00597d78bb98dd9ef8c1c198e0b33f47617ee826ea169e00f1863a39793" exitCode=0
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.410629 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df22a083-92d1-4897-967b-1f2f27c8b0e8","Type":"ContainerDied","Data":"da2bc00597d78bb98dd9ef8c1c198e0b33f47617ee826ea169e00f1863a39793"}
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.410968 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-config\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.413034 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26","Type":"ContainerStarted","Data":"346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db"}
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.416026 4792 generic.go:334] "Generic (PLEG): container finished" podID="0eacffe2-c3ab-4816-9d52-3c73de2d37cf" containerID="0a719a2345f367160540bad33d2b8c13b0ffef85060d9d0865301a309e55ffd9" exitCode=0
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.416072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7573-account-create-update-xsrqm" event={"ID":"0eacffe2-c3ab-4816-9d52-3c73de2d37cf","Type":"ContainerDied","Data":"0a719a2345f367160540bad33d2b8c13b0ffef85060d9d0865301a309e55ffd9"}
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.419184 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerStarted","Data":"bde441cbe4cd1e02111bb5a7e9fbe35da6d1f3271a94a28a8487d1ccfae02623"}
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.439264 4792 scope.go:117] "RemoveContainer" containerID="2243131349b87d5f62af0e945742d231ac9b363ca41738f5c2c3678c9865e26e"
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.451148 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.453297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9c02eb12-16a2-4c2d-849f-0309fd114fd2" (UID: "9c02eb12-16a2-4c2d-849f-0309fd114fd2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.514451 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c02eb12-16a2-4c2d-849f-0309fd114fd2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.616621 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df22a083-92d1-4897-967b-1f2f27c8b0e8-logs\") pod \"df22a083-92d1-4897-967b-1f2f27c8b0e8\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.617038 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmkrs\" (UniqueName: \"kubernetes.io/projected/df22a083-92d1-4897-967b-1f2f27c8b0e8-kube-api-access-pmkrs\") pod \"df22a083-92d1-4897-967b-1f2f27c8b0e8\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.617167 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-combined-ca-bundle\") pod \"df22a083-92d1-4897-967b-1f2f27c8b0e8\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.617286 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-config-data\") pod \"df22a083-92d1-4897-967b-1f2f27c8b0e8\" (UID: \"df22a083-92d1-4897-967b-1f2f27c8b0e8\") "
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.617219 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df22a083-92d1-4897-967b-1f2f27c8b0e8-logs" (OuterVolumeSpecName: "logs") pod "df22a083-92d1-4897-967b-1f2f27c8b0e8" (UID: "df22a083-92d1-4897-967b-1f2f27c8b0e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.618556 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df22a083-92d1-4897-967b-1f2f27c8b0e8-logs\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.622998 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df22a083-92d1-4897-967b-1f2f27c8b0e8-kube-api-access-pmkrs" (OuterVolumeSpecName: "kube-api-access-pmkrs") pod "df22a083-92d1-4897-967b-1f2f27c8b0e8" (UID: "df22a083-92d1-4897-967b-1f2f27c8b0e8"). InnerVolumeSpecName "kube-api-access-pmkrs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.654565 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-config-data" (OuterVolumeSpecName: "config-data") pod "df22a083-92d1-4897-967b-1f2f27c8b0e8" (UID: "df22a083-92d1-4897-967b-1f2f27c8b0e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.666752 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df22a083-92d1-4897-967b-1f2f27c8b0e8" (UID: "df22a083-92d1-4897-967b-1f2f27c8b0e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.723623 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmkrs\" (UniqueName: \"kubernetes.io/projected/df22a083-92d1-4897-967b-1f2f27c8b0e8-kube-api-access-pmkrs\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.723670 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.723683 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df22a083-92d1-4897-967b-1f2f27c8b0e8-config-data\") on node \"crc\" DevicePath \"\""
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.831424 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lh72w"]
Nov 27 17:33:45 crc kubenswrapper[4792]: I1127 17:33:45.852449 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-lh72w"]
Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.352347 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.434030 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerStarted","Data":"f3a9ba5ab5fedc51297b3d2e235d1537493ca02f0b3b94f4772dc29e6387cc19"}
Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.434079 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerStarted","Data":"39af710a682891d4657beb8492d695ad1dee609908ddb244b1824a8d79d6c91b"}
Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.441975 4792 generic.go:334] "Generic (PLEG): container finished" podID="c22b6e64-ba33-4da4-8c30-e202bace8c2e" containerID="645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85" exitCode=0
Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.442210 4792 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.442234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c22b6e64-ba33-4da4-8c30-e202bace8c2e","Type":"ContainerDied","Data":"645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85"} Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.444014 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c22b6e64-ba33-4da4-8c30-e202bace8c2e","Type":"ContainerDied","Data":"bc835934e007ae59f16bea473018f720d188414c7594505bc184db9d66061d59"} Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.444041 4792 scope.go:117] "RemoveContainer" containerID="645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.449559 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-config-data\") pod \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.449718 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-combined-ca-bundle\") pod \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.449825 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqvpw\" (UniqueName: \"kubernetes.io/projected/c22b6e64-ba33-4da4-8c30-e202bace8c2e-kube-api-access-jqvpw\") pod \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\" (UID: \"c22b6e64-ba33-4da4-8c30-e202bace8c2e\") " Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.464297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22b6e64-ba33-4da4-8c30-e202bace8c2e-kube-api-access-jqvpw" (OuterVolumeSpecName: "kube-api-access-jqvpw") pod "c22b6e64-ba33-4da4-8c30-e202bace8c2e" (UID: "c22b6e64-ba33-4da4-8c30-e202bace8c2e"). InnerVolumeSpecName "kube-api-access-jqvpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.470025 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.472046 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df22a083-92d1-4897-967b-1f2f27c8b0e8","Type":"ContainerDied","Data":"588f9a2e535e39062c24e181abbf37b146e477161a951c27aa862b43b402a940"} Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.492194 4792 scope.go:117] "RemoveContainer" containerID="645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.492387 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c22b6e64-ba33-4da4-8c30-e202bace8c2e" (UID: "c22b6e64-ba33-4da4-8c30-e202bace8c2e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:46 crc kubenswrapper[4792]: E1127 17:33:46.492862 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85\": container with ID starting with 645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85 not found: ID does not exist" containerID="645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.492949 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85"} err="failed to get container status \"645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85\": rpc error: code = NotFound desc = could not find container \"645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85\": container with ID starting with 645038b550cb547137aabf68610867f59e913a422156f083c6bea6ecda192e85 not found: ID does not exist" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.492974 4792 scope.go:117] "RemoveContainer" containerID="da2bc00597d78bb98dd9ef8c1c198e0b33f47617ee826ea169e00f1863a39793" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.523695 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-config-data" (OuterVolumeSpecName: "config-data") pod "c22b6e64-ba33-4da4-8c30-e202bace8c2e" (UID: "c22b6e64-ba33-4da4-8c30-e202bace8c2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.535093 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.538556 4792 scope.go:117] "RemoveContainer" containerID="e7071f17acdbddd1c4f9602427bb391a5e2595bd0e62bf4c0b085dba683a7d18" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.553188 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.553223 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22b6e64-ba33-4da4-8c30-e202bace8c2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.553233 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqvpw\" (UniqueName: \"kubernetes.io/projected/c22b6e64-ba33-4da4-8c30-e202bace8c2e-kube-api-access-jqvpw\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.560347 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.582985 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 27 17:33:46 crc kubenswrapper[4792]: E1127 17:33:46.583403 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-log" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583418 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-log" Nov 27 17:33:46 
crc kubenswrapper[4792]: E1127 17:33:46.583433 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" containerName="init" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583440 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" containerName="init" Nov 27 17:33:46 crc kubenswrapper[4792]: E1127 17:33:46.583482 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22b6e64-ba33-4da4-8c30-e202bace8c2e" containerName="nova-scheduler-scheduler" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583488 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22b6e64-ba33-4da4-8c30-e202bace8c2e" containerName="nova-scheduler-scheduler" Nov 27 17:33:46 crc kubenswrapper[4792]: E1127 17:33:46.583500 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-api" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583505 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-api" Nov 27 17:33:46 crc kubenswrapper[4792]: E1127 17:33:46.583515 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" containerName="dnsmasq-dns" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583522 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" containerName="dnsmasq-dns" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583721 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" containerName="dnsmasq-dns" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583734 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-log" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583749 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" containerName="nova-api-api" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.583760 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22b6e64-ba33-4da4-8c30-e202bace8c2e" containerName="nova-scheduler-scheduler" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.584899 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.586979 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.598145 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.702692 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c02eb12-16a2-4c2d-849f-0309fd114fd2" path="/var/lib/kubelet/pods/9c02eb12-16a2-4c2d-849f-0309fd114fd2/volumes" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.703461 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df22a083-92d1-4897-967b-1f2f27c8b0e8" path="/var/lib/kubelet/pods/df22a083-92d1-4897-967b-1f2f27c8b0e8/volumes" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.760919 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.760987 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-config-data\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.761108 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9103aad5-7427-4713-8ee8-70ca12da3709-logs\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.761216 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2mxv\" (UniqueName: \"kubernetes.io/projected/9103aad5-7427-4713-8ee8-70ca12da3709-kube-api-access-n2mxv\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.782845 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.797278 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.815300 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.817109 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.821012 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.842059 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.862993 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.863048 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-config-data\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.863154 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9103aad5-7427-4713-8ee8-70ca12da3709-logs\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.863311 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2mxv\" (UniqueName: \"kubernetes.io/projected/9103aad5-7427-4713-8ee8-70ca12da3709-kube-api-access-n2mxv\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.863883 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9103aad5-7427-4713-8ee8-70ca12da3709-logs\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.867759 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.892482 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2mxv\" (UniqueName: \"kubernetes.io/projected/9103aad5-7427-4713-8ee8-70ca12da3709-kube-api-access-n2mxv\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.928470 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-config-data\") pod \"nova-api-0\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " pod="openstack/nova-api-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.987487 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-config-data\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:46 crc 
kubenswrapper[4792]: I1127 17:33:46.988961 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:46 crc kubenswrapper[4792]: I1127 17:33:46.989039 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvzr\" (UniqueName: \"kubernetes.io/projected/13580cf5-25de-4e01-9443-26e5cdc7a50b-kube-api-access-9kvzr\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.091628 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-config-data\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.091815 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.091869 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvzr\" (UniqueName: \"kubernetes.io/projected/13580cf5-25de-4e01-9443-26e5cdc7a50b-kube-api-access-9kvzr\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.102263 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-config-data\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.105425 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.111285 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvzr\" (UniqueName: \"kubernetes.io/projected/13580cf5-25de-4e01-9443-26e5cdc7a50b-kube-api-access-9kvzr\") pod \"nova-scheduler-0\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " pod="openstack/nova-scheduler-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.133002 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7573-account-create-update-xsrqm" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.138957 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-n4j6s" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.143591 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.225672 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.298970 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4dmr\" (UniqueName: \"kubernetes.io/projected/c4be33e9-5259-40d3-9496-c1836cb67060-kube-api-access-w4dmr\") pod \"c4be33e9-5259-40d3-9496-c1836cb67060\" (UID: \"c4be33e9-5259-40d3-9496-c1836cb67060\") " Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.299364 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfmkh\" (UniqueName: \"kubernetes.io/projected/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-kube-api-access-zfmkh\") pod \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\" (UID: \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\") " Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.299396 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-operator-scripts\") pod \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\" (UID: \"0eacffe2-c3ab-4816-9d52-3c73de2d37cf\") " Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.299498 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4be33e9-5259-40d3-9496-c1836cb67060-operator-scripts\") pod \"c4be33e9-5259-40d3-9496-c1836cb67060\" (UID: \"c4be33e9-5259-40d3-9496-c1836cb67060\") " Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.301141 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eacffe2-c3ab-4816-9d52-3c73de2d37cf" (UID: "0eacffe2-c3ab-4816-9d52-3c73de2d37cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.301295 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4be33e9-5259-40d3-9496-c1836cb67060-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4be33e9-5259-40d3-9496-c1836cb67060" (UID: "c4be33e9-5259-40d3-9496-c1836cb67060"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.318366 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4be33e9-5259-40d3-9496-c1836cb67060-kube-api-access-w4dmr" (OuterVolumeSpecName: "kube-api-access-w4dmr") pod "c4be33e9-5259-40d3-9496-c1836cb67060" (UID: "c4be33e9-5259-40d3-9496-c1836cb67060"). InnerVolumeSpecName "kube-api-access-w4dmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.320883 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-kube-api-access-zfmkh" (OuterVolumeSpecName: "kube-api-access-zfmkh") pod "0eacffe2-c3ab-4816-9d52-3c73de2d37cf" (UID: "0eacffe2-c3ab-4816-9d52-3c73de2d37cf"). InnerVolumeSpecName "kube-api-access-zfmkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.405521 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfmkh\" (UniqueName: \"kubernetes.io/projected/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-kube-api-access-zfmkh\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.405560 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eacffe2-c3ab-4816-9d52-3c73de2d37cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.405569 4792 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4be33e9-5259-40d3-9496-c1836cb67060-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.405577 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4dmr\" (UniqueName: \"kubernetes.io/projected/c4be33e9-5259-40d3-9496-c1836cb67060-kube-api-access-w4dmr\") on node \"crc\" DevicePath \"\"" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.491001 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7573-account-create-update-xsrqm" event={"ID":"0eacffe2-c3ab-4816-9d52-3c73de2d37cf","Type":"ContainerDied","Data":"40dad975c511e475a2675dd080041bc2586f96767d85b003044b853c61ef2304"} Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.491040 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40dad975c511e475a2675dd080041bc2586f96767d85b003044b853c61ef2304" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.491111 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7573-account-create-update-xsrqm" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.507506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerStarted","Data":"11a8def03706eef57a188d519eeda1c26497bfb817c4d8204861b066a7de8b34"} Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.524177 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-n4j6s" event={"ID":"c4be33e9-5259-40d3-9496-c1836cb67060","Type":"ContainerDied","Data":"0f4226ec5d300c9d7aed1ad4559e6a4665118e0c382fa0aada5baa5610dd9601"} Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.524406 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f4226ec5d300c9d7aed1ad4559e6a4665118e0c382fa0aada5baa5610dd9601" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.524511 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-n4j6s" Nov 27 17:33:47 crc kubenswrapper[4792]: W1127 17:33:47.797403 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13580cf5_25de_4e01_9443_26e5cdc7a50b.slice/crio-82ea7adedf73565c3e86f753fe4934605c5f3108d3b08cabb620cd857746901a WatchSource:0}: Error finding container 82ea7adedf73565c3e86f753fe4934605c5f3108d3b08cabb620cd857746901a: Status 404 returned error can't find the container with id 82ea7adedf73565c3e86f753fe4934605c5f3108d3b08cabb620cd857746901a Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.798040 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.810808 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.810854 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 27 17:33:47 crc kubenswrapper[4792]: I1127 17:33:47.875088 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:33:48 crc kubenswrapper[4792]: I1127 17:33:48.549139 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9103aad5-7427-4713-8ee8-70ca12da3709","Type":"ContainerStarted","Data":"8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1"} Nov 27 17:33:48 crc kubenswrapper[4792]: I1127 17:33:48.549505 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9103aad5-7427-4713-8ee8-70ca12da3709","Type":"ContainerStarted","Data":"c07f0b6adc17f1d90ff26b07989d5bde3f7e1fdb88e6b2313c37b80a0585bd80"} Nov 27 17:33:48 crc kubenswrapper[4792]: I1127 17:33:48.555514 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13580cf5-25de-4e01-9443-26e5cdc7a50b","Type":"ContainerStarted","Data":"1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034"} Nov 27 17:33:48 crc kubenswrapper[4792]: I1127 17:33:48.555563 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13580cf5-25de-4e01-9443-26e5cdc7a50b","Type":"ContainerStarted","Data":"82ea7adedf73565c3e86f753fe4934605c5f3108d3b08cabb620cd857746901a"} Nov 27 17:33:48 crc kubenswrapper[4792]: I1127 17:33:48.583885 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5838585370000002 podStartE2EDuration="2.583858537s" podCreationTimestamp="2025-11-27 17:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:48.570176067 +0000 UTC m=+1450.913002385" watchObservedRunningTime="2025-11-27 17:33:48.583858537 +0000 UTC m=+1450.926684855" Nov 27 17:33:48 crc kubenswrapper[4792]: I1127 17:33:48.702919 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22b6e64-ba33-4da4-8c30-e202bace8c2e" path="/var/lib/kubelet/pods/c22b6e64-ba33-4da4-8c30-e202bace8c2e/volumes" Nov 27 17:33:49 crc kubenswrapper[4792]: I1127 17:33:49.569882 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerStarted","Data":"91c3a1fcb654de418ffea38f528a56fe1af92b4aa9da4495418b3f0a573ed63f"} Nov 27 
17:33:49 crc kubenswrapper[4792]: I1127 17:33:49.570566 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:33:49 crc kubenswrapper[4792]: I1127 17:33:49.575726 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9103aad5-7427-4713-8ee8-70ca12da3709","Type":"ContainerStarted","Data":"f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429"} Nov 27 17:33:49 crc kubenswrapper[4792]: I1127 17:33:49.616496 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.430457386 podStartE2EDuration="6.61647172s" podCreationTimestamp="2025-11-27 17:33:43 +0000 UTC" firstStartedPulling="2025-11-27 17:33:44.549385655 +0000 UTC m=+1446.892211963" lastFinishedPulling="2025-11-27 17:33:48.735399969 +0000 UTC m=+1451.078226297" observedRunningTime="2025-11-27 17:33:49.5975747 +0000 UTC m=+1451.940401048" watchObservedRunningTime="2025-11-27 17:33:49.61647172 +0000 UTC m=+1451.959298078" Nov 27 17:33:49 crc kubenswrapper[4792]: I1127 17:33:49.645951 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.645923003 podStartE2EDuration="3.645923003s" podCreationTimestamp="2025-11-27 17:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:33:49.62088397 +0000 UTC m=+1451.963710288" watchObservedRunningTime="2025-11-27 17:33:49.645923003 +0000 UTC m=+1451.988749361" Nov 27 17:33:52 crc kubenswrapper[4792]: I1127 17:33:52.144483 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 27 17:33:52 crc kubenswrapper[4792]: I1127 17:33:52.716104 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 27 17:33:52 crc kubenswrapper[4792]: I1127 17:33:52.810004 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 27 17:33:52 crc kubenswrapper[4792]: I1127 17:33:52.810072 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.230789 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-jq7k7"] Nov 27 17:33:53 crc kubenswrapper[4792]: E1127 17:33:53.231349 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4be33e9-5259-40d3-9496-c1836cb67060" containerName="mariadb-database-create" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.231364 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4be33e9-5259-40d3-9496-c1836cb67060" containerName="mariadb-database-create" Nov 27 17:33:53 crc kubenswrapper[4792]: E1127 17:33:53.231377 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eacffe2-c3ab-4816-9d52-3c73de2d37cf" containerName="mariadb-account-create-update" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.231382 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eacffe2-c3ab-4816-9d52-3c73de2d37cf" containerName="mariadb-account-create-update" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.231632 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eacffe2-c3ab-4816-9d52-3c73de2d37cf" containerName="mariadb-account-create-update" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 
17:33:53.231659 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4be33e9-5259-40d3-9496-c1836cb67060" containerName="mariadb-database-create" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.232476 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.264174 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-vlns7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.264885 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jq7k7"] Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.269022 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.269274 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.269511 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.346438 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwwth\" (UniqueName: \"kubernetes.io/projected/4f088e56-5dc8-4c86-b0f8-69ba476e721f-kube-api-access-lwwth\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.346600 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-scripts\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.346630 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-combined-ca-bundle\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.346691 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-config-data\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.449212 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-scripts\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.449282 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-combined-ca-bundle\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.449331 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-config-data\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.449415 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwwth\" (UniqueName: \"kubernetes.io/projected/4f088e56-5dc8-4c86-b0f8-69ba476e721f-kube-api-access-lwwth\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.457850 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-config-data\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.457893 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-scripts\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.461343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-combined-ca-bundle\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.467328 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwwth\" (UniqueName: \"kubernetes.io/projected/4f088e56-5dc8-4c86-b0f8-69ba476e721f-kube-api-access-lwwth\") pod \"aodh-db-sync-jq7k7\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.592053 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.828823 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 27 17:33:53 crc kubenswrapper[4792]: I1127 17:33:53.828915 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 27 17:33:54 crc kubenswrapper[4792]: I1127 17:33:54.180932 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jq7k7"] Nov 27 17:33:54 crc kubenswrapper[4792]: W1127 17:33:54.181183 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f088e56_5dc8_4c86_b0f8_69ba476e721f.slice/crio-6f47376a5cc9f01d694a97a0a2bdf2ddacbbe9582e44680b8a5d40d54918e4b2 WatchSource:0}: Error finding container 6f47376a5cc9f01d694a97a0a2bdf2ddacbbe9582e44680b8a5d40d54918e4b2: Status 404 returned error can't find the container with id 6f47376a5cc9f01d694a97a0a2bdf2ddacbbe9582e44680b8a5d40d54918e4b2 Nov 27 17:33:54 crc kubenswrapper[4792]: I1127 17:33:54.673478 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jq7k7" event={"ID":"4f088e56-5dc8-4c86-b0f8-69ba476e721f","Type":"ContainerStarted","Data":"6f47376a5cc9f01d694a97a0a2bdf2ddacbbe9582e44680b8a5d40d54918e4b2"} Nov 27 17:33:57 crc kubenswrapper[4792]: I1127 17:33:57.144271 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 27 17:33:57 crc kubenswrapper[4792]: I1127 17:33:57.185445 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 27 17:33:57 crc kubenswrapper[4792]: I1127 17:33:57.226508 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 27 17:33:57 crc kubenswrapper[4792]: I1127 17:33:57.226556 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 27 17:33:57 crc kubenswrapper[4792]: I1127 17:33:57.754051 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 27 17:33:58 crc kubenswrapper[4792]: I1127 17:33:58.309848 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 27 17:33:58 crc kubenswrapper[4792]: I1127 17:33:58.309906 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 27 17:34:00 crc kubenswrapper[4792]: I1127 17:34:00.777239 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jq7k7" 
event={"ID":"4f088e56-5dc8-4c86-b0f8-69ba476e721f","Type":"ContainerStarted","Data":"5d7d7a8fb0e99d62d68cd6ee9bbef4bff13f2fea7ec50c92e54379e9a7c68d09"} Nov 27 17:34:00 crc kubenswrapper[4792]: I1127 17:34:00.810232 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-jq7k7" podStartSLOduration=2.18061249 podStartE2EDuration="7.810208497s" podCreationTimestamp="2025-11-27 17:33:53 +0000 UTC" firstStartedPulling="2025-11-27 17:33:54.183251394 +0000 UTC m=+1456.526077712" lastFinishedPulling="2025-11-27 17:33:59.812847401 +0000 UTC m=+1462.155673719" observedRunningTime="2025-11-27 17:34:00.798365722 +0000 UTC m=+1463.141192060" watchObservedRunningTime="2025-11-27 17:34:00.810208497 +0000 UTC m=+1463.153034835" Nov 27 17:34:02 crc kubenswrapper[4792]: E1127 17:34:02.283684 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f088e56_5dc8_4c86_b0f8_69ba476e721f.slice/crio-5d7d7a8fb0e99d62d68cd6ee9bbef4bff13f2fea7ec50c92e54379e9a7c68d09.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f088e56_5dc8_4c86_b0f8_69ba476e721f.slice/crio-conmon-5d7d7a8fb0e99d62d68cd6ee9bbef4bff13f2fea7ec50c92e54379e9a7c68d09.scope\": RecentStats: unable to find data in memory cache]" Nov 27 17:34:02 crc kubenswrapper[4792]: I1127 17:34:02.818042 4792 generic.go:334] "Generic (PLEG): container finished" podID="4f088e56-5dc8-4c86-b0f8-69ba476e721f" containerID="5d7d7a8fb0e99d62d68cd6ee9bbef4bff13f2fea7ec50c92e54379e9a7c68d09" exitCode=0 Nov 27 17:34:02 crc kubenswrapper[4792]: I1127 17:34:02.818112 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jq7k7" event={"ID":"4f088e56-5dc8-4c86-b0f8-69ba476e721f","Type":"ContainerDied","Data":"5d7d7a8fb0e99d62d68cd6ee9bbef4bff13f2fea7ec50c92e54379e9a7c68d09"} Nov 27 17:34:02 crc kubenswrapper[4792]: I1127 17:34:02.831267 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 27 17:34:02 crc kubenswrapper[4792]: I1127 17:34:02.839638 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 27 17:34:02 crc kubenswrapper[4792]: I1127 17:34:02.846691 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 27 17:34:03 crc kubenswrapper[4792]: I1127 17:34:03.909854 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.366496 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.439731 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-config-data\") pod \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.439831 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-combined-ca-bundle\") pod \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.439940 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-scripts\") pod \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.439978 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwwth\" (UniqueName: \"kubernetes.io/projected/4f088e56-5dc8-4c86-b0f8-69ba476e721f-kube-api-access-lwwth\") pod \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\" (UID: \"4f088e56-5dc8-4c86-b0f8-69ba476e721f\") " Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.445915 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-scripts" (OuterVolumeSpecName: "scripts") pod "4f088e56-5dc8-4c86-b0f8-69ba476e721f" (UID: "4f088e56-5dc8-4c86-b0f8-69ba476e721f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.446933 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f088e56-5dc8-4c86-b0f8-69ba476e721f-kube-api-access-lwwth" (OuterVolumeSpecName: "kube-api-access-lwwth") pod "4f088e56-5dc8-4c86-b0f8-69ba476e721f" (UID: "4f088e56-5dc8-4c86-b0f8-69ba476e721f"). InnerVolumeSpecName "kube-api-access-lwwth". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.472921 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-config-data" (OuterVolumeSpecName: "config-data") pod "4f088e56-5dc8-4c86-b0f8-69ba476e721f" (UID: "4f088e56-5dc8-4c86-b0f8-69ba476e721f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.495920 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f088e56-5dc8-4c86-b0f8-69ba476e721f" (UID: "4f088e56-5dc8-4c86-b0f8-69ba476e721f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.543475 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.543512 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwwth\" (UniqueName: \"kubernetes.io/projected/4f088e56-5dc8-4c86-b0f8-69ba476e721f-kube-api-access-lwwth\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.543563 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.543578 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f088e56-5dc8-4c86-b0f8-69ba476e721f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.862880 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jq7k7" Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.862861 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jq7k7" event={"ID":"4f088e56-5dc8-4c86-b0f8-69ba476e721f","Type":"ContainerDied","Data":"6f47376a5cc9f01d694a97a0a2bdf2ddacbbe9582e44680b8a5d40d54918e4b2"} Nov 27 17:34:04 crc kubenswrapper[4792]: I1127 17:34:04.862935 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f47376a5cc9f01d694a97a0a2bdf2ddacbbe9582e44680b8a5d40d54918e4b2" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.634836 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.678797 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-combined-ca-bundle\") pod \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.679079 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mtvf\" (UniqueName: \"kubernetes.io/projected/52234aa9-87fd-45a8-9c3c-914366e1bbbd-kube-api-access-2mtvf\") pod \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.679222 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-config-data\") pod \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\" (UID: \"52234aa9-87fd-45a8-9c3c-914366e1bbbd\") " Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.684317 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52234aa9-87fd-45a8-9c3c-914366e1bbbd-kube-api-access-2mtvf" (OuterVolumeSpecName: "kube-api-access-2mtvf") pod "52234aa9-87fd-45a8-9c3c-914366e1bbbd" (UID: "52234aa9-87fd-45a8-9c3c-914366e1bbbd"). InnerVolumeSpecName "kube-api-access-2mtvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.719472 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52234aa9-87fd-45a8-9c3c-914366e1bbbd" (UID: "52234aa9-87fd-45a8-9c3c-914366e1bbbd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.722016 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-config-data" (OuterVolumeSpecName: "config-data") pod "52234aa9-87fd-45a8-9c3c-914366e1bbbd" (UID: "52234aa9-87fd-45a8-9c3c-914366e1bbbd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.782975 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.783017 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mtvf\" (UniqueName: \"kubernetes.io/projected/52234aa9-87fd-45a8-9c3c-914366e1bbbd-kube-api-access-2mtvf\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.783030 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52234aa9-87fd-45a8-9c3c-914366e1bbbd-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.883724 4792 generic.go:334] "Generic (PLEG): container finished" podID="52234aa9-87fd-45a8-9c3c-914366e1bbbd" containerID="aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148" exitCode=137 Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.883773 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52234aa9-87fd-45a8-9c3c-914366e1bbbd","Type":"ContainerDied","Data":"aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148"} Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.883802 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.883834 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52234aa9-87fd-45a8-9c3c-914366e1bbbd","Type":"ContainerDied","Data":"6aef16297ce737efa19b4234a7fa2129a5805fc683362ab51a65eb150ea43987"} Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.883857 4792 scope.go:117] "RemoveContainer" containerID="aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.916832 4792 scope.go:117] "RemoveContainer" containerID="aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148" Nov 27 17:34:05 crc kubenswrapper[4792]: E1127 17:34:05.917343 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148\": container with ID starting with aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148 not found: ID does not exist" containerID="aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.917385 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148"} err="failed to get container status \"aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148\": rpc error: code = NotFound desc = could not find container \"aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148\": container with ID starting with aad9a80612b9e4d7e9b1537bb435d99a82170632cfadaf155f7ad667eb026148 not found: ID does not exist" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.934468 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.949193 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.968174 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 27 17:34:05 crc kubenswrapper[4792]: E1127 17:34:05.968712 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52234aa9-87fd-45a8-9c3c-914366e1bbbd" containerName="nova-cell1-novncproxy-novncproxy" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.968775 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="52234aa9-87fd-45a8-9c3c-914366e1bbbd" containerName="nova-cell1-novncproxy-novncproxy" Nov 27 17:34:05 crc kubenswrapper[4792]: E1127 17:34:05.968825 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f088e56-5dc8-4c86-b0f8-69ba476e721f" containerName="aodh-db-sync" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.968832 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f088e56-5dc8-4c86-b0f8-69ba476e721f" containerName="aodh-db-sync" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.969076 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="52234aa9-87fd-45a8-9c3c-914366e1bbbd" containerName="nova-cell1-novncproxy-novncproxy" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.969095 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f088e56-5dc8-4c86-b0f8-69ba476e721f" containerName="aodh-db-sync" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 
17:34:05.979683 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.981463 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.983101 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 27 17:34:05 crc kubenswrapper[4792]: I1127 17:34:05.985691 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.001010 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.090065 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.090163 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.090372 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.090453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.090616 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r46xt\" (UniqueName: \"kubernetes.io/projected/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-kube-api-access-r46xt\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.192796 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r46xt\" (UniqueName: \"kubernetes.io/projected/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-kube-api-access-r46xt\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.192923 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.192953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.193488 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.193534 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.196870 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.197339 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.197607 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.198064 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.215777 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r46xt\" (UniqueName: \"kubernetes.io/projected/0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd-kube-api-access-r46xt\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd\") " pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.309105 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.703180 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52234aa9-87fd-45a8-9c3c-914366e1bbbd" path="/var/lib/kubelet/pods/52234aa9-87fd-45a8-9c3c-914366e1bbbd/volumes" Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.823950 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 27 17:34:06 crc kubenswrapper[4792]: I1127 17:34:06.900437 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd","Type":"ContainerStarted","Data":"cbe893c98376094dafb1b6ebb7b5754e50642b83ab44a9faa757dcf8f4fa42a7"} Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.230775 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.231402 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.231436 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.235362 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.936612 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.942725 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.948058 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.948208 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.948368 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-vlns7" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.955039 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd","Type":"ContainerStarted","Data":"5a0950a7369c82fd352318b3aeaa2a508315d313f1954ba3b3675aa005007f9e"} Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.955236 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.955303 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:07 crc kubenswrapper[4792]: I1127 17:34:07.993092 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.006848 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.00682902 podStartE2EDuration="3.00682902s" podCreationTimestamp="2025-11-27 17:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:34:07.98712679 +0000 UTC m=+1470.329953118" watchObservedRunningTime="2025-11-27 17:34:08.00682902 +0000 UTC 
m=+1470.349655338" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.042086 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-combined-ca-bundle\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.042228 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-config-data\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.042354 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mf4h\" (UniqueName: \"kubernetes.io/projected/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-kube-api-access-7mf4h\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.042388 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-scripts\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.144036 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-combined-ca-bundle\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.144119 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-config-data\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.144213 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mf4h\" (UniqueName: \"kubernetes.io/projected/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-kube-api-access-7mf4h\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.144241 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-scripts\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.163855 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-combined-ca-bundle\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.167594 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-config-data\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: 
I1127 17:34:08.170408 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-scripts\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.180332 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-227hq"] Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.193230 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.200146 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-227hq"] Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.203761 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mf4h\" (UniqueName: \"kubernetes.io/projected/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-kube-api-access-7mf4h\") pod \"aodh-0\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.246357 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.246424 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-config\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.246471 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.246520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.246599 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.246613 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxct\" (UniqueName: \"kubernetes.io/projected/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-kube-api-access-wlxct\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " 
pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.279339 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.291006 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.291105 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.348035 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.348828 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.348949 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-config\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.349492 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-config\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.349539 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.350077 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.351506 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: 
\"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.351683 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.351710 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxct\" (UniqueName: \"kubernetes.io/projected/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-kube-api-access-wlxct\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.352487 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.352580 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.387868 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlxct\" (UniqueName: \"kubernetes.io/projected/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-kube-api-access-wlxct\") pod \"dnsmasq-dns-6d99f6bc7f-227hq\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.591952 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.900873 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:08 crc kubenswrapper[4792]: W1127 17:34:08.904329 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46da3dec_b250_40ad_98d5_5c0e81cc9fb2.slice/crio-275859698fd64211e920b1cda83d4bf2cedac266e6022c7192b1755d6253b453 WatchSource:0}: Error finding container 275859698fd64211e920b1cda83d4bf2cedac266e6022c7192b1755d6253b453: Status 404 returned error can't find the container with id 275859698fd64211e920b1cda83d4bf2cedac266e6022c7192b1755d6253b453 Nov 27 17:34:08 crc kubenswrapper[4792]: I1127 17:34:08.972561 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerStarted","Data":"275859698fd64211e920b1cda83d4bf2cedac266e6022c7192b1755d6253b453"} Nov 27 17:34:09 crc kubenswrapper[4792]: I1127 17:34:09.131316 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-227hq"] Nov 27 17:34:09 crc kubenswrapper[4792]: I1127 17:34:09.983182 4792 generic.go:334] "Generic (PLEG): container finished" podID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" containerID="6d85a66371eb30fa40a2c02ef5a53bfa0fab3954daef3d3c6af22570ddce5af1" exitCode=0 Nov 27 17:34:09 crc kubenswrapper[4792]: I1127 17:34:09.983319 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" event={"ID":"c7d684b6-6b54-4fea-86da-6b6266a2c2eb","Type":"ContainerDied","Data":"6d85a66371eb30fa40a2c02ef5a53bfa0fab3954daef3d3c6af22570ddce5af1"} Nov 27 17:34:09 crc kubenswrapper[4792]: I1127 17:34:09.983442 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" event={"ID":"c7d684b6-6b54-4fea-86da-6b6266a2c2eb","Type":"ContainerStarted","Data":"1af6b3ac301a94864a01ea1fbf33d0203c939bf2ccf467b1a1ffc0be49075700"} Nov 27 17:34:09 crc kubenswrapper[4792]: I1127 17:34:09.985628 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerStarted","Data":"b54e2cd8f75c9e2331dd373d6fa3791552e7ab84664d71b6e0c7a88eba36c43f"} Nov 27 17:34:10 crc kubenswrapper[4792]: I1127 17:34:10.603567 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:10 crc kubenswrapper[4792]: I1127 17:34:10.997168 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-log" containerID="cri-o://8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1" gracePeriod=30 Nov 27 17:34:10 crc kubenswrapper[4792]: I1127 17:34:10.998200 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" event={"ID":"c7d684b6-6b54-4fea-86da-6b6266a2c2eb","Type":"ContainerStarted","Data":"8ec4922a35967ec6f574cb50e8fcbc84edc9a606eba95564c2cd9325610636aa"} Nov 27 17:34:10 crc kubenswrapper[4792]: I1127 17:34:10.998233 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:10 crc kubenswrapper[4792]: I1127 17:34:10.998563 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-api" containerID="cri-o://f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429" gracePeriod=30 Nov 27 17:34:11 crc kubenswrapper[4792]: I1127 17:34:11.023411 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" podStartSLOduration=3.023390066 podStartE2EDuration="3.023390066s" podCreationTimestamp="2025-11-27 17:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:34:11.013065559 +0000 UTC m=+1473.355891877" watchObservedRunningTime="2025-11-27 17:34:11.023390066 +0000 UTC m=+1473.366216384" Nov 27 17:34:11 crc kubenswrapper[4792]: I1127 17:34:11.055208 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:11 crc kubenswrapper[4792]: I1127 17:34:11.055543 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="ceilometer-central-agent" containerID="cri-o://39af710a682891d4657beb8492d695ad1dee609908ddb244b1824a8d79d6c91b" gracePeriod=30 Nov 27 17:34:11 crc kubenswrapper[4792]: I1127 17:34:11.056112 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="sg-core" containerID="cri-o://11a8def03706eef57a188d519eeda1c26497bfb817c4d8204861b066a7de8b34" gracePeriod=30 Nov 27 17:34:11 crc kubenswrapper[4792]: I1127 17:34:11.056223 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="ceilometer-notification-agent" containerID="cri-o://f3a9ba5ab5fedc51297b3d2e235d1537493ca02f0b3b94f4772dc29e6387cc19" gracePeriod=30 Nov 27 17:34:11 crc kubenswrapper[4792]: I1127 17:34:11.056256 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="proxy-httpd" containerID="cri-o://91c3a1fcb654de418ffea38f528a56fe1af92b4aa9da4495418b3f0a573ed63f" gracePeriod=30 Nov 27 17:34:11 crc kubenswrapper[4792]: I1127 17:34:11.070340 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 27 17:34:11 crc kubenswrapper[4792]: I1127 17:34:11.309695 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.009289 4792 generic.go:334] "Generic (PLEG): container finished" podID="9103aad5-7427-4713-8ee8-70ca12da3709" containerID="8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1" exitCode=143 Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.009356 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9103aad5-7427-4713-8ee8-70ca12da3709","Type":"ContainerDied","Data":"8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1"} Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.012756 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerStarted","Data":"872db1a4db784269628c1c64cc0d40bc796087d52f82e17a5b059fd26b1435f9"} Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.015522 4792 generic.go:334] "Generic (PLEG): container finished" podID="4b321af9-a492-4d9c-a75d-54987369d450" containerID="91c3a1fcb654de418ffea38f528a56fe1af92b4aa9da4495418b3f0a573ed63f" exitCode=0 Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.015539 4792 generic.go:334] "Generic (PLEG): container finished" podID="4b321af9-a492-4d9c-a75d-54987369d450" containerID="11a8def03706eef57a188d519eeda1c26497bfb817c4d8204861b066a7de8b34" exitCode=2 Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.015546 4792 generic.go:334] "Generic (PLEG): container finished" podID="4b321af9-a492-4d9c-a75d-54987369d450" containerID="39af710a682891d4657beb8492d695ad1dee609908ddb244b1824a8d79d6c91b" exitCode=0 Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.015880 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerDied","Data":"91c3a1fcb654de418ffea38f528a56fe1af92b4aa9da4495418b3f0a573ed63f"} Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.015927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerDied","Data":"11a8def03706eef57a188d519eeda1c26497bfb817c4d8204861b066a7de8b34"} Nov 27 17:34:12 crc kubenswrapper[4792]: I1127 17:34:12.015946 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerDied","Data":"39af710a682891d4657beb8492d695ad1dee609908ddb244b1824a8d79d6c91b"} Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.032711 4792 generic.go:334] "Generic (PLEG): container finished" podID="4b321af9-a492-4d9c-a75d-54987369d450" containerID="f3a9ba5ab5fedc51297b3d2e235d1537493ca02f0b3b94f4772dc29e6387cc19" exitCode=0 Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.033189 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerDied","Data":"f3a9ba5ab5fedc51297b3d2e235d1537493ca02f0b3b94f4772dc29e6387cc19"} Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.574606 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.589312 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-sg-core-conf-yaml\") pod \"4b321af9-a492-4d9c-a75d-54987369d450\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.589349 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-combined-ca-bundle\") pod \"4b321af9-a492-4d9c-a75d-54987369d450\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.589456 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-scripts\") pod \"4b321af9-a492-4d9c-a75d-54987369d450\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.589477 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgn8v\" (UniqueName: \"kubernetes.io/projected/4b321af9-a492-4d9c-a75d-54987369d450-kube-api-access-jgn8v\") pod \"4b321af9-a492-4d9c-a75d-54987369d450\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.589495 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-log-httpd\") pod \"4b321af9-a492-4d9c-a75d-54987369d450\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.589518 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-run-httpd\") pod \"4b321af9-a492-4d9c-a75d-54987369d450\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.589562 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-config-data\") pod \"4b321af9-a492-4d9c-a75d-54987369d450\" (UID: \"4b321af9-a492-4d9c-a75d-54987369d450\") " Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.590610 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b321af9-a492-4d9c-a75d-54987369d450" (UID: "4b321af9-a492-4d9c-a75d-54987369d450"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.590713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b321af9-a492-4d9c-a75d-54987369d450" (UID: "4b321af9-a492-4d9c-a75d-54987369d450"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.601974 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b321af9-a492-4d9c-a75d-54987369d450-kube-api-access-jgn8v" (OuterVolumeSpecName: "kube-api-access-jgn8v") pod "4b321af9-a492-4d9c-a75d-54987369d450" (UID: "4b321af9-a492-4d9c-a75d-54987369d450"). InnerVolumeSpecName "kube-api-access-jgn8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.603815 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-scripts" (OuterVolumeSpecName: "scripts") pod "4b321af9-a492-4d9c-a75d-54987369d450" (UID: "4b321af9-a492-4d9c-a75d-54987369d450"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.637434 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b321af9-a492-4d9c-a75d-54987369d450" (UID: "4b321af9-a492-4d9c-a75d-54987369d450"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.696317 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.696353 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.696368 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgn8v\" (UniqueName: \"kubernetes.io/projected/4b321af9-a492-4d9c-a75d-54987369d450-kube-api-access-jgn8v\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.696381 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.696392 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b321af9-a492-4d9c-a75d-54987369d450-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.742842 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b321af9-a492-4d9c-a75d-54987369d450" (UID: "4b321af9-a492-4d9c-a75d-54987369d450"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.757299 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-config-data" (OuterVolumeSpecName: "config-data") pod "4b321af9-a492-4d9c-a75d-54987369d450" (UID: "4b321af9-a492-4d9c-a75d-54987369d450"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.798543 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:13 crc kubenswrapper[4792]: I1127 17:34:13.798587 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b321af9-a492-4d9c-a75d-54987369d450-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.046186 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4b321af9-a492-4d9c-a75d-54987369d450","Type":"ContainerDied","Data":"bde441cbe4cd1e02111bb5a7e9fbe35da6d1f3271a94a28a8487d1ccfae02623"} Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.046237 4792 scope.go:117] "RemoveContainer" containerID="91c3a1fcb654de418ffea38f528a56fe1af92b4aa9da4495418b3f0a573ed63f" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.046253 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.124964 4792 scope.go:117] "RemoveContainer" containerID="11a8def03706eef57a188d519eeda1c26497bfb817c4d8204861b066a7de8b34" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.156179 4792 scope.go:117] "RemoveContainer" containerID="f3a9ba5ab5fedc51297b3d2e235d1537493ca02f0b3b94f4772dc29e6387cc19" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.158779 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.182218 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.193770 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:14 crc kubenswrapper[4792]: E1127 17:34:14.194330 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="sg-core" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.194346 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="sg-core" Nov 27 17:34:14 crc kubenswrapper[4792]: E1127 17:34:14.194371 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="proxy-httpd" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.194378 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="proxy-httpd" Nov 27 17:34:14 crc kubenswrapper[4792]: E1127 17:34:14.194410 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="ceilometer-central-agent" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.194416 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="ceilometer-central-agent" Nov 27 17:34:14 crc kubenswrapper[4792]: E1127 17:34:14.194425 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="ceilometer-notification-agent" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.194430 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="ceilometer-notification-agent" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.194659 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="sg-core" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.194676 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="proxy-httpd" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.194689 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="ceilometer-central-agent" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.194697 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b321af9-a492-4d9c-a75d-54987369d450" containerName="ceilometer-notification-agent" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.196756 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.199697 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.200658 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.200780 4792 scope.go:117] "RemoveContainer" containerID="39af710a682891d4657beb8492d695ad1dee609908ddb244b1824a8d79d6c91b" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.220733 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.307558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-run-httpd\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.307619 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.307719 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-config-data\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.307741 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.307784 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8fc\" (UniqueName: \"kubernetes.io/projected/914ff720-8cb0-4acc-aedb-d0371e5fbc20-kube-api-access-wn8fc\") pod \"ceilometer-0\" (UID: 
\"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.307819 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-log-httpd\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.307878 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-scripts\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.411406 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-config-data\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.411463 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.411524 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8fc\" (UniqueName: \"kubernetes.io/projected/914ff720-8cb0-4acc-aedb-d0371e5fbc20-kube-api-access-wn8fc\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.411580 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-log-httpd\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.411684 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-scripts\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.411736 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-run-httpd\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.411782 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.414513 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-run-httpd\") pod \"ceilometer-0\" (UID: 
\"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.414543 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-log-httpd\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.421417 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.425919 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.428343 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-scripts\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.431629 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8fc\" (UniqueName: \"kubernetes.io/projected/914ff720-8cb0-4acc-aedb-d0371e5fbc20-kube-api-access-wn8fc\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.439092 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-config-data\") pod \"ceilometer-0\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.648542 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.663524 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.726245 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9103aad5-7427-4713-8ee8-70ca12da3709-logs\") pod \"9103aad5-7427-4713-8ee8-70ca12da3709\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.726310 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-config-data\") pod \"9103aad5-7427-4713-8ee8-70ca12da3709\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.726335 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2mxv\" (UniqueName: \"kubernetes.io/projected/9103aad5-7427-4713-8ee8-70ca12da3709-kube-api-access-n2mxv\") pod \"9103aad5-7427-4713-8ee8-70ca12da3709\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.727182 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9103aad5-7427-4713-8ee8-70ca12da3709-logs" (OuterVolumeSpecName: "logs") pod "9103aad5-7427-4713-8ee8-70ca12da3709" (UID: "9103aad5-7427-4713-8ee8-70ca12da3709"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.729790 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b321af9-a492-4d9c-a75d-54987369d450" path="/var/lib/kubelet/pods/4b321af9-a492-4d9c-a75d-54987369d450/volumes" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.732699 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9103aad5-7427-4713-8ee8-70ca12da3709-kube-api-access-n2mxv" (OuterVolumeSpecName: "kube-api-access-n2mxv") pod "9103aad5-7427-4713-8ee8-70ca12da3709" (UID: "9103aad5-7427-4713-8ee8-70ca12da3709"). InnerVolumeSpecName "kube-api-access-n2mxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.766533 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-config-data" (OuterVolumeSpecName: "config-data") pod "9103aad5-7427-4713-8ee8-70ca12da3709" (UID: "9103aad5-7427-4713-8ee8-70ca12da3709"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.828126 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-combined-ca-bundle\") pod \"9103aad5-7427-4713-8ee8-70ca12da3709\" (UID: \"9103aad5-7427-4713-8ee8-70ca12da3709\") " Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.829328 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.829430 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2mxv\" (UniqueName: \"kubernetes.io/projected/9103aad5-7427-4713-8ee8-70ca12da3709-kube-api-access-n2mxv\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.829700 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9103aad5-7427-4713-8ee8-70ca12da3709-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.875735 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9103aad5-7427-4713-8ee8-70ca12da3709" (UID: "9103aad5-7427-4713-8ee8-70ca12da3709"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:14 crc kubenswrapper[4792]: I1127 17:34:14.933296 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9103aad5-7427-4713-8ee8-70ca12da3709-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.087559 4792 generic.go:334] "Generic (PLEG): container finished" podID="9103aad5-7427-4713-8ee8-70ca12da3709" containerID="f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429" exitCode=0 Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.087630 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9103aad5-7427-4713-8ee8-70ca12da3709","Type":"ContainerDied","Data":"f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429"} Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.087675 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9103aad5-7427-4713-8ee8-70ca12da3709","Type":"ContainerDied","Data":"c07f0b6adc17f1d90ff26b07989d5bde3f7e1fdb88e6b2313c37b80a0585bd80"} Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.087690 4792 scope.go:117] "RemoveContainer" containerID="f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.087821 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.102234 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerStarted","Data":"24913ca728c7008cd5e28962815cd3decdfd495bdaa8cf8ddf03e9404b9b08dd"} Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.162558 4792 scope.go:117] "RemoveContainer" containerID="8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.192707 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.222153 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.230834 4792 scope.go:117] "RemoveContainer" containerID="f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429" Nov 27 17:34:15 crc kubenswrapper[4792]: E1127 17:34:15.231817 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429\": container with ID starting with f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429 not found: ID does not exist" containerID="f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.231917 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429"} err="failed to get container status \"f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429\": rpc error: code = NotFound desc = could not find container \"f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429\": container with ID starting with f133f13642d1b7eaac427d0102f924fbc6a5eab0a0bb64acc9bc9a2607cb6429 not found: ID does not exist" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.231992 4792 scope.go:117] "RemoveContainer" containerID="8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1" Nov 27 17:34:15 crc kubenswrapper[4792]: E1127 17:34:15.234044 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1\": container with ID starting with 8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1 not found: ID does not exist" containerID="8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.234129 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1"} err="failed to get container status \"8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1\": rpc error: code = NotFound desc = could not find container \"8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1\": container with ID starting with 8049dff3801bd1eaf6bde9b6b1b1bcc8b00924a76eb88cd301da798e40fd69e1 not found: ID does not exist" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.245985 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:15 crc kubenswrapper[4792]: E1127 17:34:15.246535 4792 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-api" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.246550 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-api" Nov 27 17:34:15 crc kubenswrapper[4792]: E1127 17:34:15.246601 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-log" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.246608 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-log" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.246825 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-log" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.246855 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" containerName="nova-api-api" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.248098 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.253263 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.253434 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.259085 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.280258 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.346347 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.346506 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367a1412-dad5-4592-9a00-1891f5c1812e-logs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.346557 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-public-tls-certs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.346590 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-config-data\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.346772 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.347045 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pd8f\" (UniqueName: \"kubernetes.io/projected/367a1412-dad5-4592-9a00-1891f5c1812e-kube-api-access-4pd8f\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.415199 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.449611 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pd8f\" (UniqueName: \"kubernetes.io/projected/367a1412-dad5-4592-9a00-1891f5c1812e-kube-api-access-4pd8f\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.449728 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.449797 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367a1412-dad5-4592-9a00-1891f5c1812e-logs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.449849 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-public-tls-certs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.449875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-config-data\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.449929 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.452306 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367a1412-dad5-4592-9a00-1891f5c1812e-logs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.455235 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-public-tls-certs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.455436 
4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-config-data\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.456217 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.460021 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.467099 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pd8f\" (UniqueName: \"kubernetes.io/projected/367a1412-dad5-4592-9a00-1891f5c1812e-kube-api-access-4pd8f\") pod \"nova-api-0\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.572140 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:34:15 crc kubenswrapper[4792]: I1127 17:34:15.574259 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:15 crc kubenswrapper[4792]: W1127 17:34:15.583528 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod914ff720_8cb0_4acc_aedb_d0371e5fbc20.slice/crio-1e1c84d37d84159d519180268e4dc337d39ad5c03f4e1df83c061a60ebc38151 WatchSource:0}: Error finding container 1e1c84d37d84159d519180268e4dc337d39ad5c03f4e1df83c061a60ebc38151: Status 404 returned error can't find the container with id 1e1c84d37d84159d519180268e4dc337d39ad5c03f4e1df83c061a60ebc38151 Nov 27 17:34:16 crc kubenswrapper[4792]: I1127 17:34:16.059751 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:16 crc kubenswrapper[4792]: I1127 17:34:16.120057 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerStarted","Data":"1e1c84d37d84159d519180268e4dc337d39ad5c03f4e1df83c061a60ebc38151"} Nov 27 17:34:16 crc kubenswrapper[4792]: I1127 17:34:16.309324 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:16 crc kubenswrapper[4792]: I1127 17:34:16.332857 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:16 crc kubenswrapper[4792]: I1127 17:34:16.724016 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9103aad5-7427-4713-8ee8-70ca12da3709" path="/var/lib/kubelet/pods/9103aad5-7427-4713-8ee8-70ca12da3709/volumes" Nov 27 17:34:16 crc kubenswrapper[4792]: I1127 17:34:16.725831 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.136505 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-api" containerID="cri-o://b54e2cd8f75c9e2331dd373d6fa3791552e7ab84664d71b6e0c7a88eba36c43f" gracePeriod=30 Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.136519 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-listener" containerID="cri-o://2ea7be85fbad41610b560e751963d445b6d002f21bc26d21416330fa5e7fe524" gracePeriod=30 Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.136578 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-evaluator" containerID="cri-o://872db1a4db784269628c1c64cc0d40bc796087d52f82e17a5b059fd26b1435f9" gracePeriod=30 Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.136570 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-notifier" containerID="cri-o://24913ca728c7008cd5e28962815cd3decdfd495bdaa8cf8ddf03e9404b9b08dd" gracePeriod=30 Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.136684 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerStarted","Data":"2ea7be85fbad41610b560e751963d445b6d002f21bc26d21416330fa5e7fe524"} Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.141861 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerStarted","Data":"5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc"} Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.153851 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"367a1412-dad5-4592-9a00-1891f5c1812e","Type":"ContainerStarted","Data":"6901d26a6c5801da71ff50b23663661c66b242179007976b601bacebcfd5a925"} Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.153917 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"367a1412-dad5-4592-9a00-1891f5c1812e","Type":"ContainerStarted","Data":"8902abc3ed1ab61e7be06adb2517622e7995ee2d59e8b8f8739f30a52ee7f95e"} Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.153934 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"367a1412-dad5-4592-9a00-1891f5c1812e","Type":"ContainerStarted","Data":"b444828a65016a967ef8fdc17e4f93fb728d14497e8e77768e761e3bb2ebd992"} Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.187475 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.7228740350000002 podStartE2EDuration="10.187447048s" podCreationTimestamp="2025-11-27 17:34:07 +0000 UTC" firstStartedPulling="2025-11-27 17:34:08.908459572 +0000 UTC m=+1471.251285890" lastFinishedPulling="2025-11-27 17:34:16.373032585 +0000 UTC m=+1478.715858903" observedRunningTime="2025-11-27 17:34:17.175427898 +0000 UTC m=+1479.518254216" watchObservedRunningTime="2025-11-27 17:34:17.187447048 +0000 UTC m=+1479.530273396" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.218924 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.230791 4792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.230766256 podStartE2EDuration="2.230766256s" podCreationTimestamp="2025-11-27 17:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:34:17.209152798 +0000 UTC m=+1479.551979116" watchObservedRunningTime="2025-11-27 17:34:17.230766256 +0000 UTC m=+1479.573592584" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.534281 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-p5x75"] Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.536140 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.543334 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.543524 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.573710 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p5x75"] Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.720172 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mktdw\" (UniqueName: \"kubernetes.io/projected/dfa9c9c8-933b-4098-ab35-e8d83489a194-kube-api-access-mktdw\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.720542 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.720675 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-config-data\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.720718 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-scripts\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.823432 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-config-data\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.823726 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-scripts\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.823982 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mktdw\" (UniqueName: \"kubernetes.io/projected/dfa9c9c8-933b-4098-ab35-e8d83489a194-kube-api-access-mktdw\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.824217 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.831487 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-config-data\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.832093 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-scripts\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.834402 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:17 crc kubenswrapper[4792]: I1127 17:34:17.846261 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mktdw\" (UniqueName: \"kubernetes.io/projected/dfa9c9c8-933b-4098-ab35-e8d83489a194-kube-api-access-mktdw\") pod \"nova-cell1-cell-mapping-p5x75\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.007338 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.182538 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerStarted","Data":"1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2"} Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.195890 4792 generic.go:334] "Generic (PLEG): container finished" podID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerID="24913ca728c7008cd5e28962815cd3decdfd495bdaa8cf8ddf03e9404b9b08dd" exitCode=0 Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.195918 4792 generic.go:334] "Generic (PLEG): container finished" podID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerID="872db1a4db784269628c1c64cc0d40bc796087d52f82e17a5b059fd26b1435f9" exitCode=0 Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.195928 4792 generic.go:334] "Generic (PLEG): container finished" podID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerID="b54e2cd8f75c9e2331dd373d6fa3791552e7ab84664d71b6e0c7a88eba36c43f" exitCode=0 Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.197763 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerDied","Data":"24913ca728c7008cd5e28962815cd3decdfd495bdaa8cf8ddf03e9404b9b08dd"} Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.197793 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerDied","Data":"872db1a4db784269628c1c64cc0d40bc796087d52f82e17a5b059fd26b1435f9"} Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.197802 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerDied","Data":"b54e2cd8f75c9e2331dd373d6fa3791552e7ab84664d71b6e0c7a88eba36c43f"} Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.551255 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p5x75"] Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.598120 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.726330 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-xbg6r"] Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.726722 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" podUID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerName="dnsmasq-dns" containerID="cri-o://354f568427650e7e6fe67658b867bd2110132c04b5a8bf57dca5c92c7c3d84c6" gracePeriod=10 Nov 27 17:34:18 crc kubenswrapper[4792]: I1127 17:34:18.923708 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" podUID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.235:5353: connect: connection refused" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.206624 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p5x75" event={"ID":"dfa9c9c8-933b-4098-ab35-e8d83489a194","Type":"ContainerStarted","Data":"a9ab281c068c43096c38ee9657bb793a033586fb1d91b6c27d90e5fe50e2dafa"} Nov 27 17:34:19 crc 
kubenswrapper[4792]: I1127 17:34:19.207563 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p5x75" event={"ID":"dfa9c9c8-933b-4098-ab35-e8d83489a194","Type":"ContainerStarted","Data":"263f3156743b662bc89e9fdf730d5a5377bf4732caa185da7e7c0339e34aa104"} Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.212422 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerStarted","Data":"2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a"} Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.214239 4792 generic.go:334] "Generic (PLEG): container finished" podID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerID="354f568427650e7e6fe67658b867bd2110132c04b5a8bf57dca5c92c7c3d84c6" exitCode=0 Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.214345 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" event={"ID":"a181cdfc-ad1c-438d-945d-8e77cabce7c9","Type":"ContainerDied","Data":"354f568427650e7e6fe67658b867bd2110132c04b5a8bf57dca5c92c7c3d84c6"} Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.221194 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-p5x75" podStartSLOduration=2.22117737 podStartE2EDuration="2.22117737s" podCreationTimestamp="2025-11-27 17:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:34:19.220863302 +0000 UTC m=+1481.563689610" watchObservedRunningTime="2025-11-27 17:34:19.22117737 +0000 UTC m=+1481.564003688" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.391011 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.426399 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-swift-storage-0\") pod \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.426494 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-sb\") pod \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.426551 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-svc\") pod \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.426713 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-nb\") pod \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.426758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xszmb\" (UniqueName: \"kubernetes.io/projected/a181cdfc-ad1c-438d-945d-8e77cabce7c9-kube-api-access-xszmb\") pod \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.455881 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a181cdfc-ad1c-438d-945d-8e77cabce7c9-kube-api-access-xszmb" (OuterVolumeSpecName: "kube-api-access-xszmb") pod "a181cdfc-ad1c-438d-945d-8e77cabce7c9" (UID: "a181cdfc-ad1c-438d-945d-8e77cabce7c9"). InnerVolumeSpecName "kube-api-access-xszmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.508383 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a181cdfc-ad1c-438d-945d-8e77cabce7c9" (UID: "a181cdfc-ad1c-438d-945d-8e77cabce7c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.523333 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a181cdfc-ad1c-438d-945d-8e77cabce7c9" (UID: "a181cdfc-ad1c-438d-945d-8e77cabce7c9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.532613 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-config\") pod \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\" (UID: \"a181cdfc-ad1c-438d-945d-8e77cabce7c9\") " Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.533462 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.533481 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.533490 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xszmb\" (UniqueName: \"kubernetes.io/projected/a181cdfc-ad1c-438d-945d-8e77cabce7c9-kube-api-access-xszmb\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.540883 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a181cdfc-ad1c-438d-945d-8e77cabce7c9" (UID: "a181cdfc-ad1c-438d-945d-8e77cabce7c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.551936 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a181cdfc-ad1c-438d-945d-8e77cabce7c9" (UID: "a181cdfc-ad1c-438d-945d-8e77cabce7c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.609538 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-config" (OuterVolumeSpecName: "config") pod "a181cdfc-ad1c-438d-945d-8e77cabce7c9" (UID: "a181cdfc-ad1c-438d-945d-8e77cabce7c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.635572 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.635598 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:19 crc kubenswrapper[4792]: I1127 17:34:19.635610 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a181cdfc-ad1c-438d-945d-8e77cabce7c9-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:20 crc kubenswrapper[4792]: I1127 17:34:20.227490 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" Nov 27 17:34:20 crc kubenswrapper[4792]: I1127 17:34:20.227957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-xbg6r" event={"ID":"a181cdfc-ad1c-438d-945d-8e77cabce7c9","Type":"ContainerDied","Data":"2ed1630c9ec68ea7e4ecd2a8d4d375d686859b4f6e7ef74d4a0333945f558355"} Nov 27 17:34:20 crc kubenswrapper[4792]: I1127 17:34:20.228014 4792 scope.go:117] "RemoveContainer" containerID="354f568427650e7e6fe67658b867bd2110132c04b5a8bf57dca5c92c7c3d84c6" Nov 27 17:34:20 crc kubenswrapper[4792]: I1127 17:34:20.267116 4792 scope.go:117] "RemoveContainer" containerID="7fdf7178243f86bf11f6afea635306d501a1a1e5fdc5bd82c670770043c9b8b9" Nov 27 17:34:20 crc kubenswrapper[4792]: I1127 17:34:20.307491 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-xbg6r"] Nov 27 17:34:20 crc kubenswrapper[4792]: I1127 17:34:20.321207 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-xbg6r"] Nov 27 17:34:20 crc kubenswrapper[4792]: I1127 17:34:20.699770 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" path="/var/lib/kubelet/pods/a181cdfc-ad1c-438d-945d-8e77cabce7c9/volumes" Nov 27 17:34:21 crc kubenswrapper[4792]: I1127 17:34:21.238833 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerStarted","Data":"80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed"} Nov 27 17:34:21 crc kubenswrapper[4792]: I1127 17:34:21.240438 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="ceilometer-central-agent" containerID="cri-o://5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc" gracePeriod=30 Nov 27 17:34:21 crc kubenswrapper[4792]: I1127 17:34:21.240756 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="sg-core" containerID="cri-o://2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a" gracePeriod=30 Nov 27 17:34:21 crc kubenswrapper[4792]: I1127 17:34:21.240701 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="ceilometer-notification-agent" containerID="cri-o://1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2" gracePeriod=30 Nov 27 17:34:21 crc kubenswrapper[4792]: I1127 17:34:21.240715 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:34:21 crc kubenswrapper[4792]: I1127 17:34:21.240728 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="proxy-httpd" containerID="cri-o://80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed" gracePeriod=30 Nov 27 17:34:21 crc kubenswrapper[4792]: I1127 17:34:21.281418 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.12404759 podStartE2EDuration="7.281397222s" podCreationTimestamp="2025-11-27 17:34:14 +0000 UTC" firstStartedPulling="2025-11-27 17:34:15.596950179 +0000 UTC m=+1477.939776497" 
lastFinishedPulling="2025-11-27 17:34:20.754299821 +0000 UTC m=+1483.097126129" observedRunningTime="2025-11-27 17:34:21.275205117 +0000 UTC m=+1483.618031445" watchObservedRunningTime="2025-11-27 17:34:21.281397222 +0000 UTC m=+1483.624223550" Nov 27 17:34:22 crc kubenswrapper[4792]: I1127 17:34:22.259479 4792 generic.go:334] "Generic (PLEG): container finished" podID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerID="80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed" exitCode=0 Nov 27 17:34:22 crc kubenswrapper[4792]: I1127 17:34:22.259911 4792 generic.go:334] "Generic (PLEG): container finished" podID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerID="2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a" exitCode=2 Nov 27 17:34:22 crc kubenswrapper[4792]: I1127 17:34:22.259934 4792 generic.go:334] "Generic (PLEG): container finished" podID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerID="1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2" exitCode=0 Nov 27 17:34:22 crc kubenswrapper[4792]: I1127 17:34:22.260066 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerDied","Data":"80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed"} Nov 27 17:34:22 crc kubenswrapper[4792]: I1127 17:34:22.260139 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerDied","Data":"2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a"} Nov 27 17:34:22 crc kubenswrapper[4792]: I1127 17:34:22.260160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerDied","Data":"1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2"} Nov 27 17:34:25 crc kubenswrapper[4792]: I1127 17:34:25.305705 4792 generic.go:334] "Generic (PLEG): container finished" podID="dfa9c9c8-933b-4098-ab35-e8d83489a194" containerID="a9ab281c068c43096c38ee9657bb793a033586fb1d91b6c27d90e5fe50e2dafa" exitCode=0 Nov 27 17:34:25 crc kubenswrapper[4792]: I1127 17:34:25.306194 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p5x75" event={"ID":"dfa9c9c8-933b-4098-ab35-e8d83489a194","Type":"ContainerDied","Data":"a9ab281c068c43096c38ee9657bb793a033586fb1d91b6c27d90e5fe50e2dafa"} Nov 27 17:34:25 crc kubenswrapper[4792]: I1127 17:34:25.572873 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 27 17:34:25 crc kubenswrapper[4792]: I1127 17:34:25.573114 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.284853 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.323560 4792 generic.go:334] "Generic (PLEG): container finished" podID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerID="5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc" exitCode=0 Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.323624 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.323669 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerDied","Data":"5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc"} Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.323754 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914ff720-8cb0-4acc-aedb-d0371e5fbc20","Type":"ContainerDied","Data":"1e1c84d37d84159d519180268e4dc337d39ad5c03f4e1df83c061a60ebc38151"} Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.323778 4792 scope.go:117] "RemoveContainer" containerID="80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.339907 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-scripts\") pod \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.339987 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-log-httpd\") pod \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.340033 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-config-data\") pod \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.340145 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn8fc\" (UniqueName: \"kubernetes.io/projected/914ff720-8cb0-4acc-aedb-d0371e5fbc20-kube-api-access-wn8fc\") pod \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.340221 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-sg-core-conf-yaml\") pod \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.340296 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-run-httpd\") pod \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.340382 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-combined-ca-bundle\") pod \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\" (UID: \"914ff720-8cb0-4acc-aedb-d0371e5fbc20\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.341366 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"914ff720-8cb0-4acc-aedb-d0371e5fbc20" (UID: "914ff720-8cb0-4acc-aedb-d0371e5fbc20"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.346203 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.356785 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914ff720-8cb0-4acc-aedb-d0371e5fbc20-kube-api-access-wn8fc" (OuterVolumeSpecName: "kube-api-access-wn8fc") pod "914ff720-8cb0-4acc-aedb-d0371e5fbc20" (UID: "914ff720-8cb0-4acc-aedb-d0371e5fbc20"). InnerVolumeSpecName "kube-api-access-wn8fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.357206 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "914ff720-8cb0-4acc-aedb-d0371e5fbc20" (UID: "914ff720-8cb0-4acc-aedb-d0371e5fbc20"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.359869 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-scripts" (OuterVolumeSpecName: "scripts") pod "914ff720-8cb0-4acc-aedb-d0371e5fbc20" (UID: "914ff720-8cb0-4acc-aedb-d0371e5fbc20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.405024 4792 scope.go:117] "RemoveContainer" containerID="2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.448741 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn8fc\" (UniqueName: \"kubernetes.io/projected/914ff720-8cb0-4acc-aedb-d0371e5fbc20-kube-api-access-wn8fc\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.448769 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914ff720-8cb0-4acc-aedb-d0371e5fbc20-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.448778 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.450984 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "914ff720-8cb0-4acc-aedb-d0371e5fbc20" (UID: "914ff720-8cb0-4acc-aedb-d0371e5fbc20"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.487102 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "914ff720-8cb0-4acc-aedb-d0371e5fbc20" (UID: "914ff720-8cb0-4acc-aedb-d0371e5fbc20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.488751 4792 scope.go:117] "RemoveContainer" containerID="1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.524514 4792 scope.go:117] "RemoveContainer" containerID="5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.534746 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-config-data" (OuterVolumeSpecName: "config-data") pod "914ff720-8cb0-4acc-aedb-d0371e5fbc20" (UID: "914ff720-8cb0-4acc-aedb-d0371e5fbc20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.554854 4792 scope.go:117] "RemoveContainer" containerID="80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.555530 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.555559 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.555569 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914ff720-8cb0-4acc-aedb-d0371e5fbc20-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.555761 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed\": container with ID starting with 80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed not found: ID does not exist" containerID="80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.555796 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed"} err="failed to get container status \"80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed\": rpc error: code = NotFound desc = could not find container \"80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed\": container with ID starting with 80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed not found: ID does not exist" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.555842 4792 scope.go:117] "RemoveContainer" containerID="2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.556152 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a\": container with ID starting with 2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a not found: ID does not exist" containerID="2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a" Nov 27 17:34:26 crc 
kubenswrapper[4792]: I1127 17:34:26.556190 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a"} err="failed to get container status \"2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a\": rpc error: code = NotFound desc = could not find container \"2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a\": container with ID starting with 2cc9eea9726f4e4c62e47264621ae1a9a90e53d996d309c30ea5b5a005ff352a not found: ID does not exist" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.556206 4792 scope.go:117] "RemoveContainer" containerID="1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.556451 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2\": container with ID starting with 1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2 not found: ID does not exist" containerID="1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.556483 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2"} err="failed to get container status \"1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2\": rpc error: code = NotFound desc = could not find container \"1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2\": container with ID starting with 1287f507874cebec7af4071a8713494f100bc68f9d6d52092e0c7f21f36109e2 not found: ID does not exist" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.556508 4792 scope.go:117] "RemoveContainer" containerID="5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.556901 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc\": container with ID starting with 5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc not found: ID does not exist" containerID="5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.556927 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc"} err="failed to get container status \"5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc\": rpc error: code = NotFound desc = could not find container \"5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc\": container with ID starting with 5ec7d2b71219a616f0e1950459727e607560953936c6e2b20a8fd5b9cecef7bc not found: ID does not exist" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.591893 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.250:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.592529 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
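The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" lines above are the benign tail of the ceilometer-0 teardown: the kubelet re-issues RemoveContainer for IDs CRI-O has already deleted, the runtime answers NotFound, and cleanup converges anyway. The same NotFound-tolerant status check can be reproduced against the CRI socket directly; a minimal sketch in Go, assuming a local CRI-O endpoint at /var/run/crio/crio.sock (the helper name and timeout are illustrative, not kubelet code):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // containerGone reports whether the runtime no longer knows the ID,
    // treating NotFound as "already deleted" rather than as a failure --
    // the same interpretation that makes the retries above idempotent.
    func containerGone(client runtimeapi.RuntimeServiceClient, id string) (bool, error) {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        _, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if status.Code(err) == codes.NotFound {
            return true, nil // already removed; nothing left to do
        }
        return false, err // nil if the container still exists
    }

    func main() {
        conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        gone, err := containerGone(runtimeapi.NewRuntimeServiceClient(conn),
            "80ba8cf721835e56184061f8d152e074bbff858d602808a06e59818d7b7ec7ed")
        fmt.Println(gone, err)
    }

Only an error other than NotFound on this path would indicate real trouble; the E-level lines above are expected noise during container removal.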
podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.250:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.684800 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.719055 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725103 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.725560 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="ceilometer-notification-agent" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725571 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="ceilometer-notification-agent" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.725607 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="proxy-httpd" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725613 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="proxy-httpd" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.725624 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="sg-core" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725630 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="sg-core" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.725654 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerName="dnsmasq-dns" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725663 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerName="dnsmasq-dns" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.725673 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="ceilometer-central-agent" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725680 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="ceilometer-central-agent" Nov 27 17:34:26 crc kubenswrapper[4792]: E1127 17:34:26.725704 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerName="init" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725712 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerName="init" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725925 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a181cdfc-ad1c-438d-945d-8e77cabce7c9" containerName="dnsmasq-dns" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725944 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="sg-core" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725953 4792 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="ceilometer-notification-agent" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725966 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="proxy-httpd" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.725976 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" containerName="ceilometer-central-agent" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.728275 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.730899 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.731062 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.749081 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.869871 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.869939 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pblk\" (UniqueName: \"kubernetes.io/projected/ce94cd81-3f45-4307-8bbb-6343969d65d7-kube-api-access-6pblk\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.870039 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-scripts\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.870163 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.870227 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.870475 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-config-data\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.870511 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.903355 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.972040 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-scripts\") pod \"dfa9c9c8-933b-4098-ab35-e8d83489a194\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.972135 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-config-data\") pod \"dfa9c9c8-933b-4098-ab35-e8d83489a194\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.972368 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mktdw\" (UniqueName: \"kubernetes.io/projected/dfa9c9c8-933b-4098-ab35-e8d83489a194-kube-api-access-mktdw\") pod \"dfa9c9c8-933b-4098-ab35-e8d83489a194\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.972498 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-combined-ca-bundle\") pod \"dfa9c9c8-933b-4098-ab35-e8d83489a194\" (UID: \"dfa9c9c8-933b-4098-ab35-e8d83489a194\") " Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.972863 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-scripts\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.972959 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.972989 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.973178 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.973203 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-config-data\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 
17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.973378 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.973405 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pblk\" (UniqueName: \"kubernetes.io/projected/ce94cd81-3f45-4307-8bbb-6343969d65d7-kube-api-access-6pblk\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.976432 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa9c9c8-933b-4098-ab35-e8d83489a194-kube-api-access-mktdw" (OuterVolumeSpecName: "kube-api-access-mktdw") pod "dfa9c9c8-933b-4098-ab35-e8d83489a194" (UID: "dfa9c9c8-933b-4098-ab35-e8d83489a194"). InnerVolumeSpecName "kube-api-access-mktdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.977051 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.977208 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.977600 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-scripts\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.978609 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-scripts" (OuterVolumeSpecName: "scripts") pod "dfa9c9c8-933b-4098-ab35-e8d83489a194" (UID: "dfa9c9c8-933b-4098-ab35-e8d83489a194"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.979936 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.981330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.983691 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-config-data\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:26 crc kubenswrapper[4792]: I1127 17:34:26.991131 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pblk\" (UniqueName: \"kubernetes.io/projected/ce94cd81-3f45-4307-8bbb-6343969d65d7-kube-api-access-6pblk\") pod \"ceilometer-0\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " pod="openstack/ceilometer-0" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.010615 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-config-data" (OuterVolumeSpecName: "config-data") pod "dfa9c9c8-933b-4098-ab35-e8d83489a194" (UID: "dfa9c9c8-933b-4098-ab35-e8d83489a194"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.018575 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfa9c9c8-933b-4098-ab35-e8d83489a194" (UID: "dfa9c9c8-933b-4098-ab35-e8d83489a194"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.060789 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.075636 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.075697 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.075711 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mktdw\" (UniqueName: \"kubernetes.io/projected/dfa9c9c8-933b-4098-ab35-e8d83489a194-kube-api-access-mktdw\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.075726 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa9c9c8-933b-4098-ab35-e8d83489a194-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.338186 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p5x75" event={"ID":"dfa9c9c8-933b-4098-ab35-e8d83489a194","Type":"ContainerDied","Data":"263f3156743b662bc89e9fdf730d5a5377bf4732caa185da7e7c0339e34aa104"} Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.338501 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263f3156743b662bc89e9fdf730d5a5377bf4732caa185da7e7c0339e34aa104" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.338204 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p5x75" Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.644156 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.644369 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-log" containerID="cri-o://8902abc3ed1ab61e7be06adb2517622e7995ee2d59e8b8f8739f30a52ee7f95e" gracePeriod=30 Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.644460 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-api" containerID="cri-o://6901d26a6c5801da71ff50b23663661c66b242179007976b601bacebcfd5a925" gracePeriod=30 Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.697710 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.697910 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="13580cf5-25de-4e01-9443-26e5cdc7a50b" containerName="nova-scheduler-scheduler" containerID="cri-o://1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034" gracePeriod=30 Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.725576 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.768814 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.769248 4792 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-log" containerID="cri-o://51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3" gracePeriod=30 Nov 27 17:34:27 crc kubenswrapper[4792]: I1127 17:34:27.770123 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-metadata" containerID="cri-o://346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db" gracePeriod=30 Nov 27 17:34:28 crc kubenswrapper[4792]: I1127 17:34:28.361183 4792 generic.go:334] "Generic (PLEG): container finished" podID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerID="51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3" exitCode=143 Nov 27 17:34:28 crc kubenswrapper[4792]: I1127 17:34:28.361571 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26","Type":"ContainerDied","Data":"51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3"} Nov 27 17:34:28 crc kubenswrapper[4792]: I1127 17:34:28.365270 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerStarted","Data":"2de34c0c3163edf2760e07eae17c6b955b862ed2eb570db40ca10317b396c65a"} Nov 27 17:34:28 crc kubenswrapper[4792]: I1127 17:34:28.368944 4792 generic.go:334] "Generic (PLEG): container finished" podID="367a1412-dad5-4592-9a00-1891f5c1812e" containerID="8902abc3ed1ab61e7be06adb2517622e7995ee2d59e8b8f8739f30a52ee7f95e" exitCode=143 Nov 27 17:34:28 crc kubenswrapper[4792]: I1127 17:34:28.368980 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"367a1412-dad5-4592-9a00-1891f5c1812e","Type":"ContainerDied","Data":"8902abc3ed1ab61e7be06adb2517622e7995ee2d59e8b8f8739f30a52ee7f95e"} Nov 27 17:34:28 crc kubenswrapper[4792]: I1127 17:34:28.700788 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914ff720-8cb0-4acc-aedb-d0371e5fbc20" path="/var/lib/kubelet/pods/914ff720-8cb0-4acc-aedb-d0371e5fbc20/volumes" Nov 27 17:34:29 crc kubenswrapper[4792]: I1127 17:34:29.385048 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerStarted","Data":"d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6"} Nov 27 17:34:29 crc kubenswrapper[4792]: I1127 17:34:29.385574 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerStarted","Data":"d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8"} Nov 27 17:34:30 crc kubenswrapper[4792]: I1127 17:34:30.395962 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerStarted","Data":"b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9"} Nov 27 17:34:31 crc kubenswrapper[4792]: I1127 17:34:31.206275 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": read tcp 10.217.0.2:35058->10.217.0.239:8775: read: 
connection reset by peer" Nov 27 17:34:31 crc kubenswrapper[4792]: I1127 17:34:31.206361 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": read tcp 10.217.0.2:35054->10.217.0.239:8775: read: connection reset by peer" Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.145450 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034 is running failed: container process not found" containerID="1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.147743 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034 is running failed: container process not found" containerID="1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.150851 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034 is running failed: container process not found" containerID="1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.150907 4792 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="13580cf5-25de-4e01-9443-26e5cdc7a50b" containerName="nova-scheduler-scheduler" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.303198 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.441906 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-nova-metadata-tls-certs\") pod \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.442069 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-config-data\") pod \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.442121 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-combined-ca-bundle\") pod \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.442253 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dvd8\" (UniqueName: \"kubernetes.io/projected/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-kube-api-access-2dvd8\") pod \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.442277 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-logs\") pod \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\" (UID: \"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.444093 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-logs" (OuterVolumeSpecName: "logs") pod "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" (UID: "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.470904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerStarted","Data":"4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976"} Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.471437 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.477427 4792 generic.go:334] "Generic (PLEG): container finished" podID="367a1412-dad5-4592-9a00-1891f5c1812e" containerID="6901d26a6c5801da71ff50b23663661c66b242179007976b601bacebcfd5a925" exitCode=0 Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.477480 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"367a1412-dad5-4592-9a00-1891f5c1812e","Type":"ContainerDied","Data":"6901d26a6c5801da71ff50b23663661c66b242179007976b601bacebcfd5a925"} Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.482376 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-kube-api-access-2dvd8" (OuterVolumeSpecName: "kube-api-access-2dvd8") pod "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" (UID: "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26"). InnerVolumeSpecName "kube-api-access-2dvd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.484886 4792 generic.go:334] "Generic (PLEG): container finished" podID="13580cf5-25de-4e01-9443-26e5cdc7a50b" containerID="1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034" exitCode=0 Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.484962 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13580cf5-25de-4e01-9443-26e5cdc7a50b","Type":"ContainerDied","Data":"1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034"} Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.494744 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-config-data" (OuterVolumeSpecName: "config-data") pod "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" (UID: "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.503306 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.495029298 podStartE2EDuration="6.503280979s" podCreationTimestamp="2025-11-27 17:34:26 +0000 UTC" firstStartedPulling="2025-11-27 17:34:27.732736743 +0000 UTC m=+1490.075563061" lastFinishedPulling="2025-11-27 17:34:31.740988384 +0000 UTC m=+1494.083814742" observedRunningTime="2025-11-27 17:34:32.495064434 +0000 UTC m=+1494.837890752" watchObservedRunningTime="2025-11-27 17:34:32.503280979 +0000 UTC m=+1494.846107297" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.503822 4792 generic.go:334] "Generic (PLEG): container finished" podID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerID="346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db" exitCode=0 Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.503862 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26","Type":"ContainerDied","Data":"346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db"} Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.503889 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906b1adb-a9c0-4ba1-94eb-80bf7f0aef26","Type":"ContainerDied","Data":"0d71d7732c9a5ccc99e208c15e52f1c3b2c02762bee249a062fc78bfab038885"} Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.503905 4792 scope.go:117] "RemoveContainer" containerID="346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.504034 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.556854 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.556892 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dvd8\" (UniqueName: \"kubernetes.io/projected/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-kube-api-access-2dvd8\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.556904 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.563611 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" (UID: "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.627000 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" (UID: "906b1adb-a9c0-4ba1-94eb-80bf7f0aef26"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.659063 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.659405 4792 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.684621 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.733554 4792 scope.go:117] "RemoveContainer" containerID="51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.760593 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kvzr\" (UniqueName: \"kubernetes.io/projected/13580cf5-25de-4e01-9443-26e5cdc7a50b-kube-api-access-9kvzr\") pod \"13580cf5-25de-4e01-9443-26e5cdc7a50b\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.760766 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-config-data\") pod \"13580cf5-25de-4e01-9443-26e5cdc7a50b\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.760847 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-combined-ca-bundle\") pod \"13580cf5-25de-4e01-9443-26e5cdc7a50b\" (UID: \"13580cf5-25de-4e01-9443-26e5cdc7a50b\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.770448 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13580cf5-25de-4e01-9443-26e5cdc7a50b-kube-api-access-9kvzr" (OuterVolumeSpecName: "kube-api-access-9kvzr") pod "13580cf5-25de-4e01-9443-26e5cdc7a50b" (UID: "13580cf5-25de-4e01-9443-26e5cdc7a50b"). InnerVolumeSpecName "kube-api-access-9kvzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.788496 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.812963 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13580cf5-25de-4e01-9443-26e5cdc7a50b" (UID: "13580cf5-25de-4e01-9443-26e5cdc7a50b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.821296 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-config-data" (OuterVolumeSpecName: "config-data") pod "13580cf5-25de-4e01-9443-26e5cdc7a50b" (UID: "13580cf5-25de-4e01-9443-26e5cdc7a50b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.823457 4792 scope.go:117] "RemoveContainer" containerID="346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db" Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.823906 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db\": container with ID starting with 346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db not found: ID does not exist" containerID="346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.823936 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db"} err="failed to get container status \"346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db\": rpc error: code = NotFound desc = could not find container \"346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db\": container with ID starting with 346560f83420792d89a817824de81ee159ac70f54da5e9851bfb8d906dca38db not found: ID does not exist" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.823954 4792 scope.go:117] "RemoveContainer" containerID="51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3" Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.824233 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3\": container with ID starting with 51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3 not found: ID does not exist" containerID="51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.824252 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3"} err="failed to get container status \"51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3\": rpc error: code = NotFound desc = could not find container \"51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3\": container with ID starting with 51ccc6fa544ca15652562c107e8100094758f12ec1a5d680f583d32040c124b3 not found: ID does not exist" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.862503 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-internal-tls-certs\") pod \"367a1412-dad5-4592-9a00-1891f5c1812e\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.862545 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-combined-ca-bundle\") pod \"367a1412-dad5-4592-9a00-1891f5c1812e\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.862739 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pd8f\" (UniqueName: \"kubernetes.io/projected/367a1412-dad5-4592-9a00-1891f5c1812e-kube-api-access-4pd8f\") pod 
\"367a1412-dad5-4592-9a00-1891f5c1812e\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.862797 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-config-data\") pod \"367a1412-dad5-4592-9a00-1891f5c1812e\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.862866 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-public-tls-certs\") pod \"367a1412-dad5-4592-9a00-1891f5c1812e\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.862901 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367a1412-dad5-4592-9a00-1891f5c1812e-logs\") pod \"367a1412-dad5-4592-9a00-1891f5c1812e\" (UID: \"367a1412-dad5-4592-9a00-1891f5c1812e\") " Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.863388 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.863401 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kvzr\" (UniqueName: \"kubernetes.io/projected/13580cf5-25de-4e01-9443-26e5cdc7a50b-kube-api-access-9kvzr\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.863412 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13580cf5-25de-4e01-9443-26e5cdc7a50b-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.863812 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/367a1412-dad5-4592-9a00-1891f5c1812e-logs" (OuterVolumeSpecName: "logs") pod "367a1412-dad5-4592-9a00-1891f5c1812e" (UID: "367a1412-dad5-4592-9a00-1891f5c1812e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.866888 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.868761 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/367a1412-dad5-4592-9a00-1891f5c1812e-kube-api-access-4pd8f" (OuterVolumeSpecName: "kube-api-access-4pd8f") pod "367a1412-dad5-4592-9a00-1891f5c1812e" (UID: "367a1412-dad5-4592-9a00-1891f5c1812e"). InnerVolumeSpecName "kube-api-access-4pd8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.883225 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.894171 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.894879 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-log" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.894904 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-log" Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.894932 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13580cf5-25de-4e01-9443-26e5cdc7a50b" containerName="nova-scheduler-scheduler" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.894943 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="13580cf5-25de-4e01-9443-26e5cdc7a50b" containerName="nova-scheduler-scheduler" Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.895394 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-api" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.895526 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-api" Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.895556 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-metadata" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.895567 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-metadata" Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.895589 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa9c9c8-933b-4098-ab35-e8d83489a194" containerName="nova-manage" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.895597 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa9c9c8-933b-4098-ab35-e8d83489a194" containerName="nova-manage" Nov 27 17:34:32 crc kubenswrapper[4792]: E1127 17:34:32.895613 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-log" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.895623 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-log" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.895953 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" containerName="nova-api-api" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.895987 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa9c9c8-933b-4098-ab35-e8d83489a194" containerName="nova-manage" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.895999 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-metadata" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.896016 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" 
containerName="nova-api-log" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.896038 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" containerName="nova-metadata-log" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.896060 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="13580cf5-25de-4e01-9443-26e5cdc7a50b" containerName="nova-scheduler-scheduler" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.909752 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "367a1412-dad5-4592-9a00-1891f5c1812e" (UID: "367a1412-dad5-4592-9a00-1891f5c1812e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.913554 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.918925 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.923891 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-config-data" (OuterVolumeSpecName: "config-data") pod "367a1412-dad5-4592-9a00-1891f5c1812e" (UID: "367a1412-dad5-4592-9a00-1891f5c1812e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.925049 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.925841 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.954731 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "367a1412-dad5-4592-9a00-1891f5c1812e" (UID: "367a1412-dad5-4592-9a00-1891f5c1812e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965080 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z82cp\" (UniqueName: \"kubernetes.io/projected/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-kube-api-access-z82cp\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965287 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-config-data\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965466 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965531 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "367a1412-dad5-4592-9a00-1891f5c1812e" (UID: "367a1412-dad5-4592-9a00-1891f5c1812e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965541 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-logs\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965944 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965978 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.965992 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pd8f\" (UniqueName: \"kubernetes.io/projected/367a1412-dad5-4592-9a00-1891f5c1812e-kube-api-access-4pd8f\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.966004 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.966012 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/367a1412-dad5-4592-9a00-1891f5c1812e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:32 crc kubenswrapper[4792]: I1127 17:34:32.966022 4792 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/367a1412-dad5-4592-9a00-1891f5c1812e-logs\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.068861 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.068921 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-config-data\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.068953 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.069001 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-logs\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.069105 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z82cp\" (UniqueName: \"kubernetes.io/projected/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-kube-api-access-z82cp\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.069970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-logs\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.073263 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-config-data\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.073309 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.074735 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.087565 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z82cp\" (UniqueName: \"kubernetes.io/projected/76bd9753-9395-4ae1-a0c5-10c1ee3f0347-kube-api-access-z82cp\") pod \"nova-metadata-0\" (UID: \"76bd9753-9395-4ae1-a0c5-10c1ee3f0347\") " pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.255112 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.517909 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"367a1412-dad5-4592-9a00-1891f5c1812e","Type":"ContainerDied","Data":"b444828a65016a967ef8fdc17e4f93fb728d14497e8e77768e761e3bb2ebd992"} Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.518201 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.518225 4792 scope.go:117] "RemoveContainer" containerID="6901d26a6c5801da71ff50b23663661c66b242179007976b601bacebcfd5a925" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.522873 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"13580cf5-25de-4e01-9443-26e5cdc7a50b","Type":"ContainerDied","Data":"82ea7adedf73565c3e86f753fe4934605c5f3108d3b08cabb620cd857746901a"} Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.522977 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.566636 4792 scope.go:117] "RemoveContainer" containerID="8902abc3ed1ab61e7be06adb2517622e7995ee2d59e8b8f8739f30a52ee7f95e" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.568720 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.626321 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.632888 4792 scope.go:117] "RemoveContainer" containerID="1fba22a6ee53f985d6b1ec3a9058c768c3376907d7c7be9930c90c2f84bfb034" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.659907 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.661811 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.664228 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.665934 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.666045 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.696188 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.707434 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.719298 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.733134 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.745807 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.751755 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.754793 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.790455 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8qnh\" (UniqueName: \"kubernetes.io/projected/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-kube-api-access-p8qnh\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.790557 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e9c959-b479-4008-8042-ffa78bb38460-config-data\") pod \"nova-scheduler-0\" (UID: \"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.790710 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-config-data\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.790756 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-logs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.790794 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675cp\" (UniqueName: \"kubernetes.io/projected/09e9c959-b479-4008-8042-ffa78bb38460-kube-api-access-675cp\") pod \"nova-scheduler-0\" (UID: \"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.790857 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.790958 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.791081 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e9c959-b479-4008-8042-ffa78bb38460-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.791143 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.852015 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.894049 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.895558 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8qnh\" (UniqueName: \"kubernetes.io/projected/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-kube-api-access-p8qnh\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.895684 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e9c959-b479-4008-8042-ffa78bb38460-config-data\") pod \"nova-scheduler-0\" (UID: \"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.895854 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-config-data\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.895950 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-logs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.896059 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-675cp\" (UniqueName: \"kubernetes.io/projected/09e9c959-b479-4008-8042-ffa78bb38460-kube-api-access-675cp\") pod \"nova-scheduler-0\" (UID: \"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.896184 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.896310 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.896469 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e9c959-b479-4008-8042-ffa78bb38460-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.897230 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-logs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.904225 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e9c959-b479-4008-8042-ffa78bb38460-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.909491 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-config-data\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.920255 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e9c959-b479-4008-8042-ffa78bb38460-config-data\") pod \"nova-scheduler-0\" (UID: \"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.920317 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.920619 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-public-tls-certs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.927176 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.932820 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-675cp\" (UniqueName: \"kubernetes.io/projected/09e9c959-b479-4008-8042-ffa78bb38460-kube-api-access-675cp\") pod \"nova-scheduler-0\" (UID: \"09e9c959-b479-4008-8042-ffa78bb38460\") " pod="openstack/nova-scheduler-0" Nov 27 17:34:33 crc kubenswrapper[4792]: I1127 17:34:33.938283 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8qnh\" (UniqueName: \"kubernetes.io/projected/7cd1499d-a3bb-449a-85d6-fcb81e3b43ee-kube-api-access-p8qnh\") pod \"nova-api-0\" (UID: \"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee\") " pod="openstack/nova-api-0" Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.002700 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.133604 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.538158 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76bd9753-9395-4ae1-a0c5-10c1ee3f0347","Type":"ContainerStarted","Data":"062713eac8b1879004b9169ee026ee4d318ba0bd9a98ca42fe39b8b900f8988c"} Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.538685 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76bd9753-9395-4ae1-a0c5-10c1ee3f0347","Type":"ContainerStarted","Data":"ccefee4047c1a4c669a6e126547820c148ffd974f69b1c6fe7f34d73dc85e2f3"} Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.538696 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76bd9753-9395-4ae1-a0c5-10c1ee3f0347","Type":"ContainerStarted","Data":"ea422d5b726db6f7a451c70cc3ea34a931167baf9fd01dc1db5f507ff71a215b"} Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.580381 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 27 17:34:34 crc kubenswrapper[4792]: W1127 17:34:34.581332 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cd1499d_a3bb_449a_85d6_fcb81e3b43ee.slice/crio-06293bd9486db5273d63cee190921480bc713bd4a311cc518dfcb8d8242d2e1f WatchSource:0}: Error finding container 06293bd9486db5273d63cee190921480bc713bd4a311cc518dfcb8d8242d2e1f: Status 404 returned error can't find the container with id 06293bd9486db5273d63cee190921480bc713bd4a311cc518dfcb8d8242d2e1f Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.587322 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.587288882 podStartE2EDuration="2.587288882s" podCreationTimestamp="2025-11-27 17:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:34:34.573889659 +0000 UTC m=+1496.916715977" watchObservedRunningTime="2025-11-27 17:34:34.587288882 +0000 UTC m=+1496.930115200" Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.685616 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.704367 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13580cf5-25de-4e01-9443-26e5cdc7a50b" path="/var/lib/kubelet/pods/13580cf5-25de-4e01-9443-26e5cdc7a50b/volumes" Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.705154 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="367a1412-dad5-4592-9a00-1891f5c1812e" path="/var/lib/kubelet/pods/367a1412-dad5-4592-9a00-1891f5c1812e/volumes" Nov 27 17:34:34 crc kubenswrapper[4792]: I1127 17:34:34.705920 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906b1adb-a9c0-4ba1-94eb-80bf7f0aef26" path="/var/lib/kubelet/pods/906b1adb-a9c0-4ba1-94eb-80bf7f0aef26/volumes" Nov 27 17:34:35 crc kubenswrapper[4792]: I1127 17:34:35.557925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09e9c959-b479-4008-8042-ffa78bb38460","Type":"ContainerStarted","Data":"930193501e2929dfb0c92c95e331d0d0eeb709f099bbe19cfa215c0818ac5bb2"} Nov 27 17:34:35 crc kubenswrapper[4792]: I1127 17:34:35.558267 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"09e9c959-b479-4008-8042-ffa78bb38460","Type":"ContainerStarted","Data":"39777cd35001a7323ed1d6dbc8f7117409fb44021566617b4835dc9e2c7e815a"} Nov 27 17:34:35 crc kubenswrapper[4792]: I1127 17:34:35.565481 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee","Type":"ContainerStarted","Data":"30bd4be063c1d58b00cf30216ac56f6302cb235f6565540710992bb2325d4ac4"} Nov 27 17:34:35 crc kubenswrapper[4792]: I1127 17:34:35.565863 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee","Type":"ContainerStarted","Data":"39e6353192f903da3440a03aa213d9290c8aaa2bd777967b81a62dc155c5a674"} Nov 27 17:34:35 crc kubenswrapper[4792]: I1127 17:34:35.565963 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7cd1499d-a3bb-449a-85d6-fcb81e3b43ee","Type":"ContainerStarted","Data":"06293bd9486db5273d63cee190921480bc713bd4a311cc518dfcb8d8242d2e1f"} Nov 27 17:34:35 crc kubenswrapper[4792]: I1127 17:34:35.586467 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.586448293 podStartE2EDuration="2.586448293s" podCreationTimestamp="2025-11-27 17:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:34:35.573410268 +0000 UTC m=+1497.916236586" watchObservedRunningTime="2025-11-27 17:34:35.586448293 +0000 UTC m=+1497.929274611" Nov 27 17:34:35 crc kubenswrapper[4792]: I1127 17:34:35.603493 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.603467836 podStartE2EDuration="2.603467836s" podCreationTimestamp="2025-11-27 17:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:34:35.595633511 +0000 UTC m=+1497.938459959" watchObservedRunningTime="2025-11-27 17:34:35.603467836 +0000 UTC m=+1497.946294154" Nov 27 17:34:38 crc kubenswrapper[4792]: I1127 17:34:38.255383 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 27 17:34:38 crc kubenswrapper[4792]: I1127 17:34:38.256289 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 27 17:34:38 crc kubenswrapper[4792]: I1127 17:34:38.290922 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:34:38 crc kubenswrapper[4792]: I1127 17:34:38.291033 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:34:39 crc kubenswrapper[4792]: I1127 17:34:39.137154 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 27 17:34:41 crc kubenswrapper[4792]: I1127 17:34:41.984691 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-9hh7f"] Nov 27 17:34:41 crc kubenswrapper[4792]: I1127 17:34:41.988120 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.022274 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hh7f"] Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.073224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frm2f\" (UniqueName: \"kubernetes.io/projected/7180a433-86ea-495c-9fc8-22583bbabe14-kube-api-access-frm2f\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.073382 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-catalog-content\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.073435 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-utilities\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.176027 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frm2f\" (UniqueName: \"kubernetes.io/projected/7180a433-86ea-495c-9fc8-22583bbabe14-kube-api-access-frm2f\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.176272 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-catalog-content\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.176413 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-utilities\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.177202 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-catalog-content\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.177328 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-utilities\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " 
pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.200513 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frm2f\" (UniqueName: \"kubernetes.io/projected/7180a433-86ea-495c-9fc8-22583bbabe14-kube-api-access-frm2f\") pod \"redhat-operators-9hh7f\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.309692 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:34:42 crc kubenswrapper[4792]: I1127 17:34:42.789336 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hh7f"] Nov 27 17:34:43 crc kubenswrapper[4792]: I1127 17:34:43.255845 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 27 17:34:43 crc kubenswrapper[4792]: I1127 17:34:43.256211 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 27 17:34:43 crc kubenswrapper[4792]: I1127 17:34:43.658934 4792 generic.go:334] "Generic (PLEG): container finished" podID="7180a433-86ea-495c-9fc8-22583bbabe14" containerID="30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135" exitCode=0 Nov 27 17:34:43 crc kubenswrapper[4792]: I1127 17:34:43.658982 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hh7f" event={"ID":"7180a433-86ea-495c-9fc8-22583bbabe14","Type":"ContainerDied","Data":"30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135"} Nov 27 17:34:43 crc kubenswrapper[4792]: I1127 17:34:43.659025 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hh7f" event={"ID":"7180a433-86ea-495c-9fc8-22583bbabe14","Type":"ContainerStarted","Data":"3f7dd1d0fd9c1383a468906e708330e7187ddfb3974f9783128ae2197ce4d0e1"} Nov 27 17:34:44 crc kubenswrapper[4792]: I1127 17:34:44.003205 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 27 17:34:44 crc kubenswrapper[4792]: I1127 17:34:44.005024 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 27 17:34:44 crc kubenswrapper[4792]: I1127 17:34:44.135122 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 27 17:34:44 crc kubenswrapper[4792]: I1127 17:34:44.169139 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 27 17:34:44 crc kubenswrapper[4792]: I1127 17:34:44.270998 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="76bd9753-9395-4ae1-a0c5-10c1ee3f0347" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.253:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 27 17:34:44 crc kubenswrapper[4792]: I1127 17:34:44.271039 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="76bd9753-9395-4ae1-a0c5-10c1ee3f0347" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.253:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 27 17:34:44 crc kubenswrapper[4792]: I1127 17:34:44.732568 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 27 17:34:45 crc kubenswrapper[4792]: I1127 17:34:45.016946 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7cd1499d-a3bb-449a-85d6-fcb81e3b43ee" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 27 17:34:45 crc kubenswrapper[4792]: I1127 17:34:45.016968 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7cd1499d-a3bb-449a-85d6-fcb81e3b43ee" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.254:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 27 17:34:45 crc kubenswrapper[4792]: I1127 17:34:45.700918 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hh7f" event={"ID":"7180a433-86ea-495c-9fc8-22583bbabe14","Type":"ContainerStarted","Data":"5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6"} Nov 27 17:34:47 crc kubenswrapper[4792]: E1127 17:34:47.441451 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46da3dec_b250_40ad_98d5_5c0e81cc9fb2.slice/crio-conmon-2ea7be85fbad41610b560e751963d445b6d002f21bc26d21416330fa5e7fe524.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46da3dec_b250_40ad_98d5_5c0e81cc9fb2.slice/crio-2ea7be85fbad41610b560e751963d445b6d002f21bc26d21416330fa5e7fe524.scope\": RecentStats: unable to find data in memory cache]" Nov 27 17:34:47 crc kubenswrapper[4792]: I1127 17:34:47.732008 4792 generic.go:334] "Generic (PLEG): container finished" podID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerID="2ea7be85fbad41610b560e751963d445b6d002f21bc26d21416330fa5e7fe524" exitCode=137 Nov 27 17:34:47 crc kubenswrapper[4792]: I1127 17:34:47.732319 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerDied","Data":"2ea7be85fbad41610b560e751963d445b6d002f21bc26d21416330fa5e7fe524"} Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.150319 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.233088 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mf4h\" (UniqueName: \"kubernetes.io/projected/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-kube-api-access-7mf4h\") pod \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.233218 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-scripts\") pod \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.233273 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-config-data\") pod \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.233335 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-combined-ca-bundle\") pod \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\" (UID: \"46da3dec-b250-40ad-98d5-5c0e81cc9fb2\") " Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.243619 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-scripts" (OuterVolumeSpecName: "scripts") pod "46da3dec-b250-40ad-98d5-5c0e81cc9fb2" (UID: "46da3dec-b250-40ad-98d5-5c0e81cc9fb2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.244139 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-kube-api-access-7mf4h" (OuterVolumeSpecName: "kube-api-access-7mf4h") pod "46da3dec-b250-40ad-98d5-5c0e81cc9fb2" (UID: "46da3dec-b250-40ad-98d5-5c0e81cc9fb2"). InnerVolumeSpecName "kube-api-access-7mf4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.334982 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mf4h\" (UniqueName: \"kubernetes.io/projected/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-kube-api-access-7mf4h\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.335011 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.370305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-config-data" (OuterVolumeSpecName: "config-data") pod "46da3dec-b250-40ad-98d5-5c0e81cc9fb2" (UID: "46da3dec-b250-40ad-98d5-5c0e81cc9fb2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.373265 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46da3dec-b250-40ad-98d5-5c0e81cc9fb2" (UID: "46da3dec-b250-40ad-98d5-5c0e81cc9fb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.441818 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.441864 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46da3dec-b250-40ad-98d5-5c0e81cc9fb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.743259 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"46da3dec-b250-40ad-98d5-5c0e81cc9fb2","Type":"ContainerDied","Data":"275859698fd64211e920b1cda83d4bf2cedac266e6022c7192b1755d6253b453"} Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.743306 4792 scope.go:117] "RemoveContainer" containerID="2ea7be85fbad41610b560e751963d445b6d002f21bc26d21416330fa5e7fe524" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.743457 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.778296 4792 scope.go:117] "RemoveContainer" containerID="24913ca728c7008cd5e28962815cd3decdfd495bdaa8cf8ddf03e9404b9b08dd" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.796818 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.809893 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.828760 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:48 crc kubenswrapper[4792]: E1127 17:34:48.829182 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-api" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.829193 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-api" Nov 27 17:34:48 crc kubenswrapper[4792]: E1127 17:34:48.829207 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-listener" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.829212 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-listener" Nov 27 17:34:48 crc kubenswrapper[4792]: E1127 17:34:48.829233 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-evaluator" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.829240 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-evaluator" Nov 27 17:34:48 crc kubenswrapper[4792]: E1127 17:34:48.829258 4792 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-notifier" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.829264 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-notifier" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.829478 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-notifier" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.829487 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-api" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.829507 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-evaluator" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.829529 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" containerName="aodh-listener" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.830348 4792 scope.go:117] "RemoveContainer" containerID="872db1a4db784269628c1c64cc0d40bc796087d52f82e17a5b059fd26b1435f9" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.831578 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.834297 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.835766 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.836106 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.836211 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-vlns7" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.836333 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.841211 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.886482 4792 scope.go:117] "RemoveContainer" containerID="b54e2cd8f75c9e2331dd373d6fa3791552e7ab84664d71b6e0c7a88eba36c43f" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.952479 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-combined-ca-bundle\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.952531 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-scripts\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.952572 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjh4r\" (UniqueName: 
\"kubernetes.io/projected/35f335bf-9584-4205-8bab-e1f8b83cf0db-kube-api-access-qjh4r\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.952604 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-config-data\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.952638 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-internal-tls-certs\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:48 crc kubenswrapper[4792]: I1127 17:34:48.952750 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-public-tls-certs\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.054356 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-scripts\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.054882 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjh4r\" (UniqueName: \"kubernetes.io/projected/35f335bf-9584-4205-8bab-e1f8b83cf0db-kube-api-access-qjh4r\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.054940 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-config-data\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.054992 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-internal-tls-certs\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.055202 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-public-tls-certs\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.055302 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-combined-ca-bundle\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.058445 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-config-data\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.058659 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-internal-tls-certs\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.058894 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-combined-ca-bundle\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.059052 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-public-tls-certs\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.074498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-scripts\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.076748 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjh4r\" (UniqueName: \"kubernetes.io/projected/35f335bf-9584-4205-8bab-e1f8b83cf0db-kube-api-access-qjh4r\") pod \"aodh-0\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.182083 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.686830 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 27 17:34:49 crc kubenswrapper[4792]: I1127 17:34:49.765968 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerStarted","Data":"eac4d299465886c9a4b6fd69aa61c10b41028437614efc5836638ca0a92d87b2"} Nov 27 17:34:50 crc kubenswrapper[4792]: I1127 17:34:50.702845 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46da3dec-b250-40ad-98d5-5c0e81cc9fb2" path="/var/lib/kubelet/pods/46da3dec-b250-40ad-98d5-5c0e81cc9fb2/volumes" Nov 27 17:34:50 crc kubenswrapper[4792]: I1127 17:34:50.780559 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerStarted","Data":"1ef7a1e5844ab1f9445bbb0c16c2610237ada53ba51bb79275db18f2b26c1e6b"} Nov 27 17:34:51 crc kubenswrapper[4792]: I1127 17:34:51.794326 4792 generic.go:334] "Generic (PLEG): container finished" podID="7180a433-86ea-495c-9fc8-22583bbabe14" containerID="5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6" exitCode=0 Nov 27 17:34:51 crc kubenswrapper[4792]: I1127 17:34:51.794451 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hh7f" event={"ID":"7180a433-86ea-495c-9fc8-22583bbabe14","Type":"ContainerDied","Data":"5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6"} Nov 27 17:34:52 crc kubenswrapper[4792]: I1127 17:34:52.811258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerStarted","Data":"3c10aa265ef8396c4e48cb4df30d2f4dc6d22f609b5c501b8cbe352be9772642"} Nov 27 17:34:52 crc kubenswrapper[4792]: I1127 17:34:52.817933 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hh7f" event={"ID":"7180a433-86ea-495c-9fc8-22583bbabe14","Type":"ContainerStarted","Data":"2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e"} Nov 27 17:34:52 crc kubenswrapper[4792]: I1127 17:34:52.848630 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9hh7f" podStartSLOduration=3.159691952 podStartE2EDuration="11.84861133s" podCreationTimestamp="2025-11-27 17:34:41 +0000 UTC" firstStartedPulling="2025-11-27 17:34:43.661286666 +0000 UTC m=+1506.004112984" lastFinishedPulling="2025-11-27 17:34:52.350206044 +0000 UTC m=+1514.693032362" observedRunningTime="2025-11-27 17:34:52.837466323 +0000 UTC m=+1515.180292641" watchObservedRunningTime="2025-11-27 17:34:52.84861133 +0000 UTC m=+1515.191437648" Nov 27 17:34:53 crc kubenswrapper[4792]: I1127 17:34:53.263529 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 27 17:34:53 crc kubenswrapper[4792]: I1127 17:34:53.264975 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 27 17:34:53 crc kubenswrapper[4792]: I1127 17:34:53.268848 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 27 17:34:53 crc kubenswrapper[4792]: I1127 17:34:53.834675 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerStarted","Data":"ac9fb9e5908311fe5a378f2c7d163c8bb3114982c12e158c33c1507e8ca479fd"} Nov 27 17:34:53 crc kubenswrapper[4792]: I1127 17:34:53.835050 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerStarted","Data":"ad22d44a6d8beceebcfcb49ddb6cd0c68d4477b691402937500ce834132dab60"} Nov 27 17:34:53 crc kubenswrapper[4792]: I1127 17:34:53.849211 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 27 17:34:53 crc kubenswrapper[4792]: I1127 17:34:53.866332 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.174446357 podStartE2EDuration="5.866314132s" podCreationTimestamp="2025-11-27 17:34:48 +0000 UTC" firstStartedPulling="2025-11-27 17:34:49.693477335 +0000 UTC m=+1512.036303653" lastFinishedPulling="2025-11-27 17:34:53.38534511 +0000 UTC m=+1515.728171428" observedRunningTime="2025-11-27 17:34:53.864900507 +0000 UTC m=+1516.207726825" watchObservedRunningTime="2025-11-27 17:34:53.866314132 +0000 UTC m=+1516.209140440" Nov 27 17:34:54 crc kubenswrapper[4792]: I1127 17:34:54.021990 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 27 17:34:54 crc kubenswrapper[4792]: I1127 17:34:54.022564 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 27 17:34:54 crc kubenswrapper[4792]: I1127 17:34:54.028230 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 27 17:34:54 crc kubenswrapper[4792]: I1127 17:34:54.034116 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 27 17:34:54 crc kubenswrapper[4792]: I1127 17:34:54.852441 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 27 17:34:54 crc kubenswrapper[4792]: I1127 17:34:54.903512 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 27 17:34:57 crc kubenswrapper[4792]: I1127 17:34:57.069374 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.372963 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.374467 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1ba0cf2f-cd7d-4133-9746-61abf95e4420" containerName="kube-state-metrics" containerID="cri-o://ee77d21d98990c8ef21b29587f255577c0c94058f424026a0c7ec8fd34c2522a" gracePeriod=30 Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.515615 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.516082 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" containerName="mysqld-exporter" containerID="cri-o://9691adb05de04852dd6a5022c34491f5af95bd0040d089920e35ed5b2e6e0b31" gracePeriod=30 Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.935946 4792 generic.go:334] "Generic (PLEG): container finished" podID="1ba0cf2f-cd7d-4133-9746-61abf95e4420" 
containerID="ee77d21d98990c8ef21b29587f255577c0c94058f424026a0c7ec8fd34c2522a" exitCode=2 Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.936255 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1ba0cf2f-cd7d-4133-9746-61abf95e4420","Type":"ContainerDied","Data":"ee77d21d98990c8ef21b29587f255577c0c94058f424026a0c7ec8fd34c2522a"} Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.936587 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1ba0cf2f-cd7d-4133-9746-61abf95e4420","Type":"ContainerDied","Data":"bbcd885c50230f2bc98df687efbed6a210526e07f0853f3033e2512948561c50"} Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.936623 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbcd885c50230f2bc98df687efbed6a210526e07f0853f3033e2512948561c50" Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.941671 4792 generic.go:334] "Generic (PLEG): container finished" podID="0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" containerID="9691adb05de04852dd6a5022c34491f5af95bd0040d089920e35ed5b2e6e0b31" exitCode=2 Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.941716 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5","Type":"ContainerDied","Data":"9691adb05de04852dd6a5022c34491f5af95bd0040d089920e35ed5b2e6e0b31"} Nov 27 17:35:01 crc kubenswrapper[4792]: I1127 17:35:01.958252 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.024283 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kklf6\" (UniqueName: \"kubernetes.io/projected/1ba0cf2f-cd7d-4133-9746-61abf95e4420-kube-api-access-kklf6\") pod \"1ba0cf2f-cd7d-4133-9746-61abf95e4420\" (UID: \"1ba0cf2f-cd7d-4133-9746-61abf95e4420\") " Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.030278 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba0cf2f-cd7d-4133-9746-61abf95e4420-kube-api-access-kklf6" (OuterVolumeSpecName: "kube-api-access-kklf6") pod "1ba0cf2f-cd7d-4133-9746-61abf95e4420" (UID: "1ba0cf2f-cd7d-4133-9746-61abf95e4420"). InnerVolumeSpecName "kube-api-access-kklf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.062201 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.126533 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkfcn\" (UniqueName: \"kubernetes.io/projected/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-kube-api-access-rkfcn\") pod \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.126830 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-combined-ca-bundle\") pod \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.127052 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-config-data\") pod \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\" (UID: \"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5\") " Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.127812 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kklf6\" (UniqueName: \"kubernetes.io/projected/1ba0cf2f-cd7d-4133-9746-61abf95e4420-kube-api-access-kklf6\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.131053 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-kube-api-access-rkfcn" (OuterVolumeSpecName: "kube-api-access-rkfcn") pod "0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" (UID: "0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5"). InnerVolumeSpecName "kube-api-access-rkfcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.156510 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" (UID: "0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.197258 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-config-data" (OuterVolumeSpecName: "config-data") pod "0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" (UID: "0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.229589 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.229616 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkfcn\" (UniqueName: \"kubernetes.io/projected/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-kube-api-access-rkfcn\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.229625 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.310761 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.310927 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.954100 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.954222 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5","Type":"ContainerDied","Data":"3140ebf73b816a6cba43394ba0f8afc9aaae3802018332f739510cd583a42aee"} Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.954623 4792 scope.go:117] "RemoveContainer" containerID="9691adb05de04852dd6a5022c34491f5af95bd0040d089920e35ed5b2e6e0b31" Nov 27 17:35:02 crc kubenswrapper[4792]: I1127 17:35:02.954314 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.013866 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.038764 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.038807 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: E1127 17:35:03.039218 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba0cf2f-cd7d-4133-9746-61abf95e4420" containerName="kube-state-metrics" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.039228 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba0cf2f-cd7d-4133-9746-61abf95e4420" containerName="kube-state-metrics" Nov 27 17:35:03 crc kubenswrapper[4792]: E1127 17:35:03.039279 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" containerName="mysqld-exporter" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.039286 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" containerName="mysqld-exporter" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.039482 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba0cf2f-cd7d-4133-9746-61abf95e4420" containerName="kube-state-metrics" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.039512 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" containerName="mysqld-exporter" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.040252 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.056476 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.065701 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.099207 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.151852 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.198819 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.203418 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.203511 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvctx\" (UniqueName: \"kubernetes.io/projected/0248f4d6-3146-4bb3-85d8-03cdfb42238a-kube-api-access-wvctx\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.203636 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.203790 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-config-data\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.226334 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.228300 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.231318 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.231428 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.239885 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.305905 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvctx\" (UniqueName: \"kubernetes.io/projected/0248f4d6-3146-4bb3-85d8-03cdfb42238a-kube-api-access-wvctx\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.306055 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.306136 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-config-data\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.306250 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.312440 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.312764 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.313024 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0248f4d6-3146-4bb3-85d8-03cdfb42238a-config-data\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.323528 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvctx\" (UniqueName: \"kubernetes.io/projected/0248f4d6-3146-4bb3-85d8-03cdfb42238a-kube-api-access-wvctx\") pod \"mysqld-exporter-0\" (UID: \"0248f4d6-3146-4bb3-85d8-03cdfb42238a\") " pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc 
kubenswrapper[4792]: I1127 17:35:03.370992 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9hh7f" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="registry-server" probeResult="failure" output=< Nov 27 17:35:03 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:35:03 crc kubenswrapper[4792]: > Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.392574 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.408110 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.408296 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.408477 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27ldg\" (UniqueName: \"kubernetes.io/projected/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-api-access-27ldg\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.408545 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.510951 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.512666 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.512806 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27ldg\" (UniqueName: \"kubernetes.io/projected/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-api-access-27ldg\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.512872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.518326 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.523858 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.524289 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.546743 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27ldg\" (UniqueName: \"kubernetes.io/projected/dfcff168-fa89-462b-a1e2-8422c13e0ab3-kube-api-access-27ldg\") pod \"kube-state-metrics-0\" (UID: \"dfcff168-fa89-462b-a1e2-8422c13e0ab3\") " pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.552376 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.868466 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.869082 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="ceilometer-central-agent" containerID="cri-o://d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8" gracePeriod=30 Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.869640 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="proxy-httpd" containerID="cri-o://4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976" gracePeriod=30 Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.869962 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="sg-core" containerID="cri-o://b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9" gracePeriod=30 Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.870009 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="ceilometer-notification-agent" containerID="cri-o://d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6" gracePeriod=30 Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.935555 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.938317 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 17:35:03 crc kubenswrapper[4792]: I1127 17:35:03.975498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0248f4d6-3146-4bb3-85d8-03cdfb42238a","Type":"ContainerStarted","Data":"343b3538a883fb66c378f44652ab9eeb208023163f68bba878724d5d109a366e"} Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.125948 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.703556 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5" path="/var/lib/kubelet/pods/0a89ac26-e7c6-4cac-b67e-7e7cdfd8c7c5/volumes" Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.704756 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba0cf2f-cd7d-4133-9746-61abf95e4420" path="/var/lib/kubelet/pods/1ba0cf2f-cd7d-4133-9746-61abf95e4420/volumes" Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.991852 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dfcff168-fa89-462b-a1e2-8422c13e0ab3","Type":"ContainerStarted","Data":"8f47a6978d8b5e6f7bb659281639f3759a53189ff07acb501965551cfb9804d6"} Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.997138 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerID="4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976" exitCode=0 Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.997178 4792 generic.go:334] "Generic (PLEG): 
container finished" podID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerID="b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9" exitCode=2 Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.997190 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerID="d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8" exitCode=0 Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.997228 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerDied","Data":"4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976"} Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.997263 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerDied","Data":"b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9"} Nov 27 17:35:04 crc kubenswrapper[4792]: I1127 17:35:04.997278 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerDied","Data":"d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8"} Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.013153 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"dfcff168-fa89-462b-a1e2-8422c13e0ab3","Type":"ContainerStarted","Data":"e7362ad10e5361a2ef15acb0c6849c79fecdd39ef0e17d589756518809983ce3"} Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.013540 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.016609 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"0248f4d6-3146-4bb3-85d8-03cdfb42238a","Type":"ContainerStarted","Data":"0468f3d3a518b57d257b553a4455632362025e65cc7f3d43fca8458cc287fa6b"} Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.056905 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.690261466 podStartE2EDuration="3.056876161s" podCreationTimestamp="2025-11-27 17:35:03 +0000 UTC" firstStartedPulling="2025-11-27 17:35:04.119064517 +0000 UTC m=+1526.461890835" lastFinishedPulling="2025-11-27 17:35:04.485679212 +0000 UTC m=+1526.828505530" observedRunningTime="2025-11-27 17:35:06.040518275 +0000 UTC m=+1528.383344603" watchObservedRunningTime="2025-11-27 17:35:06.056876161 +0000 UTC m=+1528.399702479" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.089275 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.33852286 podStartE2EDuration="4.089252977s" podCreationTimestamp="2025-11-27 17:35:02 +0000 UTC" firstStartedPulling="2025-11-27 17:35:03.938056571 +0000 UTC m=+1526.280882889" lastFinishedPulling="2025-11-27 17:35:04.688786688 +0000 UTC m=+1527.031613006" observedRunningTime="2025-11-27 17:35:06.065165847 +0000 UTC m=+1528.407992185" watchObservedRunningTime="2025-11-27 17:35:06.089252977 +0000 UTC m=+1528.432079295" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.525956 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.714097 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-run-httpd\") pod \"ce94cd81-3f45-4307-8bbb-6343969d65d7\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.714235 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-config-data\") pod \"ce94cd81-3f45-4307-8bbb-6343969d65d7\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.714266 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-combined-ca-bundle\") pod \"ce94cd81-3f45-4307-8bbb-6343969d65d7\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.714440 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-scripts\") pod \"ce94cd81-3f45-4307-8bbb-6343969d65d7\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.714517 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pblk\" (UniqueName: \"kubernetes.io/projected/ce94cd81-3f45-4307-8bbb-6343969d65d7-kube-api-access-6pblk\") pod \"ce94cd81-3f45-4307-8bbb-6343969d65d7\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.714572 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce94cd81-3f45-4307-8bbb-6343969d65d7" (UID: "ce94cd81-3f45-4307-8bbb-6343969d65d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.714584 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-log-httpd\") pod \"ce94cd81-3f45-4307-8bbb-6343969d65d7\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.714674 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-sg-core-conf-yaml\") pod \"ce94cd81-3f45-4307-8bbb-6343969d65d7\" (UID: \"ce94cd81-3f45-4307-8bbb-6343969d65d7\") " Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.715467 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce94cd81-3f45-4307-8bbb-6343969d65d7" (UID: "ce94cd81-3f45-4307-8bbb-6343969d65d7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.715730 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.715755 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce94cd81-3f45-4307-8bbb-6343969d65d7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.735899 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-scripts" (OuterVolumeSpecName: "scripts") pod "ce94cd81-3f45-4307-8bbb-6343969d65d7" (UID: "ce94cd81-3f45-4307-8bbb-6343969d65d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.735949 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce94cd81-3f45-4307-8bbb-6343969d65d7-kube-api-access-6pblk" (OuterVolumeSpecName: "kube-api-access-6pblk") pod "ce94cd81-3f45-4307-8bbb-6343969d65d7" (UID: "ce94cd81-3f45-4307-8bbb-6343969d65d7"). InnerVolumeSpecName "kube-api-access-6pblk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.775032 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce94cd81-3f45-4307-8bbb-6343969d65d7" (UID: "ce94cd81-3f45-4307-8bbb-6343969d65d7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.817542 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.817577 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pblk\" (UniqueName: \"kubernetes.io/projected/ce94cd81-3f45-4307-8bbb-6343969d65d7-kube-api-access-6pblk\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.817587 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.840695 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce94cd81-3f45-4307-8bbb-6343969d65d7" (UID: "ce94cd81-3f45-4307-8bbb-6343969d65d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.858150 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-config-data" (OuterVolumeSpecName: "config-data") pod "ce94cd81-3f45-4307-8bbb-6343969d65d7" (UID: "ce94cd81-3f45-4307-8bbb-6343969d65d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.919669 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:06 crc kubenswrapper[4792]: I1127 17:35:06.920053 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce94cd81-3f45-4307-8bbb-6343969d65d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.032168 4792 generic.go:334] "Generic (PLEG): container finished" podID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerID="d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6" exitCode=0 Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.032255 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerDied","Data":"d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6"} Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.032310 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce94cd81-3f45-4307-8bbb-6343969d65d7","Type":"ContainerDied","Data":"2de34c0c3163edf2760e07eae17c6b955b862ed2eb570db40ca10317b396c65a"} Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.032334 4792 scope.go:117] "RemoveContainer" containerID="4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.032276 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.063304 4792 scope.go:117] "RemoveContainer" containerID="b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.068966 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.079977 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.112013 4792 scope.go:117] "RemoveContainer" containerID="d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.134980 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:07 crc kubenswrapper[4792]: E1127 17:35:07.135461 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="ceilometer-central-agent" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.135480 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="ceilometer-central-agent" Nov 27 17:35:07 crc kubenswrapper[4792]: E1127 17:35:07.135509 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="ceilometer-notification-agent" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.135515 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="ceilometer-notification-agent" Nov 27 17:35:07 crc kubenswrapper[4792]: E1127 17:35:07.135530 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="proxy-httpd" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.135536 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="proxy-httpd" Nov 27 17:35:07 crc kubenswrapper[4792]: E1127 17:35:07.135560 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="sg-core" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.135566 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="sg-core" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.135791 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="proxy-httpd" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.135853 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="ceilometer-notification-agent" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.135871 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="ceilometer-central-agent" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.135882 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" containerName="sg-core" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.137996 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.140459 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.140517 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.141873 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.174528 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.179849 4792 scope.go:117] "RemoveContainer" containerID="d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.209959 4792 scope.go:117] "RemoveContainer" containerID="4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976" Nov 27 17:35:07 crc kubenswrapper[4792]: E1127 17:35:07.210429 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976\": container with ID starting with 4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976 not found: ID does not exist" containerID="4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.210476 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976"} err="failed to get container status \"4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976\": rpc error: code = NotFound desc = could not find container 
\"4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976\": container with ID starting with 4a7eb608905037b5b90675493d2aa727bf1c1ba2202f0ed70eb48ffc3135e976 not found: ID does not exist" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.210502 4792 scope.go:117] "RemoveContainer" containerID="b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9" Nov 27 17:35:07 crc kubenswrapper[4792]: E1127 17:35:07.210901 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9\": container with ID starting with b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9 not found: ID does not exist" containerID="b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.210922 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9"} err="failed to get container status \"b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9\": rpc error: code = NotFound desc = could not find container \"b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9\": container with ID starting with b3c4ea63a9c1a0ee3cf71613db436fe91a5c2f2f77083ab5bb2c59479d4d0bb9 not found: ID does not exist" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.210936 4792 scope.go:117] "RemoveContainer" containerID="d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6" Nov 27 17:35:07 crc kubenswrapper[4792]: E1127 17:35:07.211157 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6\": container with ID starting with d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6 not found: ID does not exist" containerID="d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.211190 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6"} err="failed to get container status \"d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6\": rpc error: code = NotFound desc = could not find container \"d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6\": container with ID starting with d223da1b712ea4487be67cad355226b064ddecdce7d20a3e1b0c520f94a1abb6 not found: ID does not exist" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.211213 4792 scope.go:117] "RemoveContainer" containerID="d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8" Nov 27 17:35:07 crc kubenswrapper[4792]: E1127 17:35:07.211619 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8\": container with ID starting with d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8 not found: ID does not exist" containerID="d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.211657 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8"} 
err="failed to get container status \"d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8\": rpc error: code = NotFound desc = could not find container \"d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8\": container with ID starting with d690627b211d38f6a1490fa2cefbdf50b22589a00987ffebdf70d077634a52e8 not found: ID does not exist" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.330774 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-log-httpd\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.330924 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.331037 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.331249 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.331284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-config-data\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.331358 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-run-httpd\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.331379 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55pqh\" (UniqueName: \"kubernetes.io/projected/9278b907-83d9-463e-9fd9-41d227ad834d-kube-api-access-55pqh\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.331448 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-scripts\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.433351 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.433411 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.433499 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.433524 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-config-data\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.433569 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55pqh\" (UniqueName: \"kubernetes.io/projected/9278b907-83d9-463e-9fd9-41d227ad834d-kube-api-access-55pqh\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.433598 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-run-httpd\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.433655 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-scripts\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.433727 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-log-httpd\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.434385 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-log-httpd\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.434460 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-run-httpd\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.437555 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.439723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.439740 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.439788 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-config-data\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.452068 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-scripts\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.455538 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55pqh\" (UniqueName: \"kubernetes.io/projected/9278b907-83d9-463e-9fd9-41d227ad834d-kube-api-access-55pqh\") pod \"ceilometer-0\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.457629 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:35:07 crc kubenswrapper[4792]: I1127 17:35:07.942509 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:08 crc kubenswrapper[4792]: I1127 17:35:08.053272 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerStarted","Data":"5438cc281b25fc15d584599f2180f7e13bfeea647f9167d7f10259d41dcc0737"} Nov 27 17:35:08 crc kubenswrapper[4792]: I1127 17:35:08.291199 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:35:08 crc kubenswrapper[4792]: I1127 17:35:08.291582 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:35:08 crc kubenswrapper[4792]: I1127 17:35:08.291672 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:35:08 crc kubenswrapper[4792]: I1127 17:35:08.292977 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 17:35:08 crc kubenswrapper[4792]: I1127 17:35:08.293079 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" gracePeriod=600 Nov 27 17:35:08 crc kubenswrapper[4792]: E1127 17:35:08.420482 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:35:08 crc kubenswrapper[4792]: I1127 17:35:08.699552 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce94cd81-3f45-4307-8bbb-6343969d65d7" path="/var/lib/kubelet/pods/ce94cd81-3f45-4307-8bbb-6343969d65d7/volumes" Nov 27 17:35:09 crc kubenswrapper[4792]: I1127 17:35:09.070099 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerStarted","Data":"0dcb4012334df0ca18bc06697c59f4734d9db0d720ae1e5ce634999c0950a952"} Nov 27 17:35:09 crc kubenswrapper[4792]: I1127 17:35:09.074707 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" 
containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" exitCode=0 Nov 27 17:35:09 crc kubenswrapper[4792]: I1127 17:35:09.074755 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09"} Nov 27 17:35:09 crc kubenswrapper[4792]: I1127 17:35:09.074808 4792 scope.go:117] "RemoveContainer" containerID="96c8b617d1cd650967466a2e285f319ed4525e9b0567767b82907caf8e1a4e24" Nov 27 17:35:09 crc kubenswrapper[4792]: I1127 17:35:09.075768 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:35:09 crc kubenswrapper[4792]: E1127 17:35:09.076118 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:35:10 crc kubenswrapper[4792]: I1127 17:35:10.087506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerStarted","Data":"c8f35560cdd6781ef0b02adf95881e7308bf044b6b23ad7c752b803d2fcc242e"} Nov 27 17:35:10 crc kubenswrapper[4792]: I1127 17:35:10.959279 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-rr9ql"] Nov 27 17:35:10 crc kubenswrapper[4792]: I1127 17:35:10.970594 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-rr9ql"] Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.045253 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-44cd2"] Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.046857 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:35:10 crc kubenswrapper[4792]: I1127 17:35:10.087506 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerStarted","Data":"c8f35560cdd6781ef0b02adf95881e7308bf044b6b23ad7c752b803d2fcc242e"} Nov 27 17:35:10 crc kubenswrapper[4792]: I1127 17:35:10.959279 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-rr9ql"] Nov 27 17:35:10 crc kubenswrapper[4792]: I1127 17:35:10.970594 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-rr9ql"] Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.045253 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-44cd2"] Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.046857 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.060411 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-44cd2"] Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.099757 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerStarted","Data":"1ed95b2c1879420ab9a22969e9deed1b7db8f0ea918f2e4eeb9838cbd48318cd"} Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.154940 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh99l\" (UniqueName: \"kubernetes.io/projected/7e297bad-8615-4fcd-a43a-4ef82af97714-kube-api-access-xh99l\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.155072 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-config-data\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.155131 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-combined-ca-bundle\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.257772 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh99l\" (UniqueName: \"kubernetes.io/projected/7e297bad-8615-4fcd-a43a-4ef82af97714-kube-api-access-xh99l\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.257929 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-config-data\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.257985 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-combined-ca-bundle\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.267687 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-combined-ca-bundle\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.268207 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-config-data\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc
kubenswrapper[4792]: I1127 17:35:11.282169 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh99l\" (UniqueName: \"kubernetes.io/projected/7e297bad-8615-4fcd-a43a-4ef82af97714-kube-api-access-xh99l\") pod \"heat-db-sync-44cd2\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.369460 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:11 crc kubenswrapper[4792]: I1127 17:35:11.972310 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-44cd2"] Nov 27 17:35:11 crc kubenswrapper[4792]: W1127 17:35:11.979781 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e297bad_8615_4fcd_a43a_4ef82af97714.slice/crio-1c5ae616983a171d9a11a5c99f82d59a84ebdc801ca789eb42bb1f9e81dc7530 WatchSource:0}: Error finding container 1c5ae616983a171d9a11a5c99f82d59a84ebdc801ca789eb42bb1f9e81dc7530: Status 404 returned error can't find the container with id 1c5ae616983a171d9a11a5c99f82d59a84ebdc801ca789eb42bb1f9e81dc7530 Nov 27 17:35:12 crc kubenswrapper[4792]: I1127 17:35:12.207933 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-44cd2" event={"ID":"7e297bad-8615-4fcd-a43a-4ef82af97714","Type":"ContainerStarted","Data":"1c5ae616983a171d9a11a5c99f82d59a84ebdc801ca789eb42bb1f9e81dc7530"} Nov 27 17:35:12 crc kubenswrapper[4792]: I1127 17:35:12.702890 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3f2e74-1077-4a57-9851-1113b4a46729" path="/var/lib/kubelet/pods/1e3f2e74-1077-4a57-9851-1113b4a46729/volumes" Nov 27 17:35:13 crc kubenswrapper[4792]: I1127 17:35:13.023496 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:35:13 crc kubenswrapper[4792]: I1127 17:35:13.223164 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerStarted","Data":"11e80191bc85ac12b5d8a629217576a9b8ff13833c39101587dd9494d1684a39"} Nov 27 17:35:13 crc kubenswrapper[4792]: I1127 17:35:13.223681 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:35:13 crc kubenswrapper[4792]: I1127 17:35:13.251976 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.207979695 podStartE2EDuration="6.251957025s" podCreationTimestamp="2025-11-27 17:35:07 +0000 UTC" firstStartedPulling="2025-11-27 17:35:07.944804254 +0000 UTC m=+1530.287630572" lastFinishedPulling="2025-11-27 17:35:11.988781594 +0000 UTC m=+1534.331607902" observedRunningTime="2025-11-27 17:35:13.248383906 +0000 UTC m=+1535.591210244" watchObservedRunningTime="2025-11-27 17:35:13.251957025 +0000 UTC m=+1535.594783343" Nov 27 17:35:13 crc kubenswrapper[4792]: I1127 17:35:13.395688 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9hh7f" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="registry-server" probeResult="failure" output=< Nov 27 17:35:13 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:35:13 crc kubenswrapper[4792]: > Nov 27 17:35:13 crc kubenswrapper[4792]: I1127 17:35:13.598269 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/kube-state-metrics-0" Nov 27 17:35:14 crc kubenswrapper[4792]: I1127 17:35:14.110861 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:35:16 crc kubenswrapper[4792]: I1127 17:35:16.215883 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:16 crc kubenswrapper[4792]: I1127 17:35:16.216691 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="ceilometer-central-agent" containerID="cri-o://0dcb4012334df0ca18bc06697c59f4734d9db0d720ae1e5ce634999c0950a952" gracePeriod=30 Nov 27 17:35:16 crc kubenswrapper[4792]: I1127 17:35:16.217204 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="proxy-httpd" containerID="cri-o://11e80191bc85ac12b5d8a629217576a9b8ff13833c39101587dd9494d1684a39" gracePeriod=30 Nov 27 17:35:16 crc kubenswrapper[4792]: I1127 17:35:16.217255 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="sg-core" containerID="cri-o://1ed95b2c1879420ab9a22969e9deed1b7db8f0ea918f2e4eeb9838cbd48318cd" gracePeriod=30 Nov 27 17:35:16 crc kubenswrapper[4792]: I1127 17:35:16.217289 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="ceilometer-notification-agent" containerID="cri-o://c8f35560cdd6781ef0b02adf95881e7308bf044b6b23ad7c752b803d2fcc242e" gracePeriod=30 Nov 27 17:35:17 crc kubenswrapper[4792]: I1127 17:35:17.315327 4792 generic.go:334] "Generic (PLEG): container finished" podID="9278b907-83d9-463e-9fd9-41d227ad834d" containerID="11e80191bc85ac12b5d8a629217576a9b8ff13833c39101587dd9494d1684a39" exitCode=0 Nov 27 17:35:17 crc kubenswrapper[4792]: I1127 17:35:17.315571 4792 generic.go:334] "Generic (PLEG): container finished" podID="9278b907-83d9-463e-9fd9-41d227ad834d" containerID="1ed95b2c1879420ab9a22969e9deed1b7db8f0ea918f2e4eeb9838cbd48318cd" exitCode=2 Nov 27 17:35:17 crc kubenswrapper[4792]: I1127 17:35:17.315582 4792 generic.go:334] "Generic (PLEG): container finished" podID="9278b907-83d9-463e-9fd9-41d227ad834d" containerID="c8f35560cdd6781ef0b02adf95881e7308bf044b6b23ad7c752b803d2fcc242e" exitCode=0 Nov 27 17:35:17 crc kubenswrapper[4792]: I1127 17:35:17.315521 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerDied","Data":"11e80191bc85ac12b5d8a629217576a9b8ff13833c39101587dd9494d1684a39"} Nov 27 17:35:17 crc kubenswrapper[4792]: I1127 17:35:17.315618 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerDied","Data":"1ed95b2c1879420ab9a22969e9deed1b7db8f0ea918f2e4eeb9838cbd48318cd"} Nov 27 17:35:17 crc kubenswrapper[4792]: I1127 17:35:17.315632 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerDied","Data":"c8f35560cdd6781ef0b02adf95881e7308bf044b6b23ad7c752b803d2fcc242e"} Nov 27 17:35:18 crc kubenswrapper[4792]: I1127 17:35:18.393605 4792 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/rabbitmq-server-0" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" containerName="rabbitmq" containerID="cri-o://8659e37a39916a91f8b179786828202562ff0282105e9f9d5ccdfcbc85122bf8" gracePeriod=604795 Nov 27 17:35:19 crc kubenswrapper[4792]: I1127 17:35:19.148468 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" containerName="rabbitmq" containerID="cri-o://eb19d94a0bd842d00dcd04f1e391501497c7d4035c53f569b620e977505a3609" gracePeriod=604795 Nov 27 17:35:19 crc kubenswrapper[4792]: I1127 17:35:19.688713 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:35:19 crc kubenswrapper[4792]: E1127 17:35:19.688954 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:35:23 crc kubenswrapper[4792]: I1127 17:35:23.376412 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9hh7f" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="registry-server" probeResult="failure" output=< Nov 27 17:35:23 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:35:23 crc kubenswrapper[4792]: > Nov 27 17:35:25 crc kubenswrapper[4792]: I1127 17:35:25.426420 4792 generic.go:334] "Generic (PLEG): container finished" podID="27d6022e-eea3-41e9-b880-620328dc5d78" containerID="8659e37a39916a91f8b179786828202562ff0282105e9f9d5ccdfcbc85122bf8" exitCode=0 Nov 27 17:35:25 crc kubenswrapper[4792]: I1127 17:35:25.426606 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27d6022e-eea3-41e9-b880-620328dc5d78","Type":"ContainerDied","Data":"8659e37a39916a91f8b179786828202562ff0282105e9f9d5ccdfcbc85122bf8"} Nov 27 17:35:25 crc kubenswrapper[4792]: I1127 17:35:25.429313 4792 generic.go:334] "Generic (PLEG): container finished" podID="dbbf8d9a-2069-4544-92db-ad5174339775" containerID="eb19d94a0bd842d00dcd04f1e391501497c7d4035c53f569b620e977505a3609" exitCode=0 Nov 27 17:35:25 crc kubenswrapper[4792]: I1127 17:35:25.429353 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbbf8d9a-2069-4544-92db-ad5174339775","Type":"ContainerDied","Data":"eb19d94a0bd842d00dcd04f1e391501497c7d4035c53f569b620e977505a3609"} Nov 27 17:35:27 crc kubenswrapper[4792]: I1127 17:35:27.284068 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Nov 27 17:35:27 crc kubenswrapper[4792]: I1127 17:35:27.623408 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.652972 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-594cb89c79-wxs9b"] Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.655966 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.663099 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.671938 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-wxs9b"] Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.785516 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.788268 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.788448 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.788507 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzg4d\" (UniqueName: \"kubernetes.io/projected/5353a78d-2b26-479b-aff1-9e871769dd58-kube-api-access-bzg4d\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.789115 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-config\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.789353 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-svc\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.789384 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.891108 4792 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.891211 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.891284 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.891326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzg4d\" (UniqueName: \"kubernetes.io/projected/5353a78d-2b26-479b-aff1-9e871769dd58-kube-api-access-bzg4d\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.891376 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-config\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.891493 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-svc\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.891533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.892551 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.892599 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.893356 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.893382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-config\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.893919 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-svc\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.894086 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:29 crc kubenswrapper[4792]: I1127 17:35:29.917004 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzg4d\" (UniqueName: \"kubernetes.io/projected/5353a78d-2b26-479b-aff1-9e871769dd58-kube-api-access-bzg4d\") pod \"dnsmasq-dns-594cb89c79-wxs9b\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:30 crc kubenswrapper[4792]: I1127 17:35:30.021469 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:30 crc kubenswrapper[4792]: I1127 17:35:30.688072 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:35:30 crc kubenswrapper[4792]: E1127 17:35:30.688298 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:31.996295 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:31.996696 4792 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:31.996817 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xh99l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-44cd2_openstack(7e297bad-8615-4fcd-a43a-4ef82af97714): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:31.998006 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-44cd2" podUID="7e297bad-8615-4fcd-a43a-4ef82af97714"
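The UnhandledError entry above inlines the entire Container spec for heat-db-sync into one line, and the part that matters (an ErrImagePull while copying quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested, which resurfaces below as ImagePullBackOff) is easy to lose in it. For journals in this shape a small field filter helps; the sketch below pulls the pod and err fields out of the "Error syncing pod, skipping" entries (a hypothetical helper, with regexes tuned to the quoted key=value layout seen here):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Patterns for the klog key=value layout used in this journal: the quoted
// message after the source location, plus the pod= and err= fields.
var (
	msgRe = regexp.MustCompile(`\] "([^"]+)"`)
	podRe = regexp.MustCompile(`pod="([^"]+)"`)
	errRe = regexp.MustCompile(`err="((?:[^"\\]|\\.)*)"`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		m := msgRe.FindStringSubmatch(line)
		if m == nil || m[1] != "Error syncing pod, skipping" {
			continue
		}
		p := podRe.FindStringSubmatch(line)
		e := errRe.FindStringSubmatch(line)
		if p != nil && e != nil {
			fmt.Printf("%s\t%s\n", p[1], e[1])
		}
	}
}

Fed journalctl output on stdin, it prints one pod and error pair per line, for example openstack/heat-db-sync-44cd2 with the ErrImagePull text above.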
\"27d6022e-eea3-41e9-b880-620328dc5d78\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.256737 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-tls\") pod \"dbbf8d9a-2069-4544-92db-ad5174339775\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.256773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-erlang-cookie\") pod \"dbbf8d9a-2069-4544-92db-ad5174339775\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257543 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"dbbf8d9a-2069-4544-92db-ad5174339775\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257597 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbf8d9a-2069-4544-92db-ad5174339775-erlang-cookie-secret\") pod \"dbbf8d9a-2069-4544-92db-ad5174339775\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257628 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"27d6022e-eea3-41e9-b880-620328dc5d78\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257687 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-server-conf\") pod \"27d6022e-eea3-41e9-b880-620328dc5d78\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257742 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-config-data\") pod \"dbbf8d9a-2069-4544-92db-ad5174339775\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257863 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-server-conf\") pod \"dbbf8d9a-2069-4544-92db-ad5174339775\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257890 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbbf8d9a-2069-4544-92db-ad5174339775-pod-info\") pod \"dbbf8d9a-2069-4544-92db-ad5174339775\" (UID: \"dbbf8d9a-2069-4544-92db-ad5174339775\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257933 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-confd\") pod \"dbbf8d9a-2069-4544-92db-ad5174339775\" (UID: 
\"dbbf8d9a-2069-4544-92db-ad5174339775\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257963 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-config-data\") pod \"27d6022e-eea3-41e9-b880-620328dc5d78\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.257990 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-plugins\") pod \"27d6022e-eea3-41e9-b880-620328dc5d78\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.258027 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-plugins-conf\") pod \"27d6022e-eea3-41e9-b880-620328dc5d78\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.258050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-erlang-cookie\") pod \"27d6022e-eea3-41e9-b880-620328dc5d78\" (UID: \"27d6022e-eea3-41e9-b880-620328dc5d78\") " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.259404 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.260017 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.261473 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.269749 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.270084 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.271508 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.271658 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.277956 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbbf8d9a-2069-4544-92db-ad5174339775-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.278320 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.280573 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.281930 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/27d6022e-eea3-41e9-b880-620328dc5d78-pod-info" (OuterVolumeSpecName: "pod-info") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.283805 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-kube-api-access-wnlnd" (OuterVolumeSpecName: "kube-api-access-wnlnd") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "kube-api-access-wnlnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.287967 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-kube-api-access-bq4sm" (OuterVolumeSpecName: "kube-api-access-bq4sm") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). 
InnerVolumeSpecName "kube-api-access-bq4sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.290077 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.290622 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.295874 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d6022e-eea3-41e9-b880-620328dc5d78-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.296686 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dbbf8d9a-2069-4544-92db-ad5174339775-pod-info" (OuterVolumeSpecName: "pod-info") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.320202 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-config-data" (OuterVolumeSpecName: "config-data") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363404 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27d6022e-eea3-41e9-b880-620328dc5d78-pod-info\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363439 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363451 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq4sm\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-kube-api-access-bq4sm\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363462 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363473 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27d6022e-eea3-41e9-b880-620328dc5d78-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363482 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363493 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363529 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363540 4792 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbbf8d9a-2069-4544-92db-ad5174339775-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363554 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363562 4792 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbbf8d9a-2069-4544-92db-ad5174339775-pod-info\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363570 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363578 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc 
kubenswrapper[4792]: I1127 17:35:32.363588 4792 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363595 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.363603 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnlnd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-kube-api-access-wnlnd\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.377507 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-server-conf" (OuterVolumeSpecName: "server-conf") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.417084 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-config-data" (OuterVolumeSpecName: "config-data") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.433074 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-server-conf" (OuterVolumeSpecName: "server-conf") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.446951 4792 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.468725 4792 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.470611 4792 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.470633 4792 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.470657 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27d6022e-eea3-41e9-b880-620328dc5d78-server-conf\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.470667 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.470676 4792 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbbf8d9a-2069-4544-92db-ad5174339775-server-conf\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: W1127 17:35:32.520894 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5353a78d_2b26_479b_aff1_9e871769dd58.slice/crio-9c0919bf753d0d4f6fcbc0bb66da6c88f15a13fb61a6c3b5931de96b4a8418dd WatchSource:0}: Error finding container 9c0919bf753d0d4f6fcbc0bb66da6c88f15a13fb61a6c3b5931de96b4a8418dd: Status 404 returned error can't find the container with id 9c0919bf753d0d4f6fcbc0bb66da6c88f15a13fb61a6c3b5931de96b4a8418dd Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.535837 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbbf8d9a-2069-4544-92db-ad5174339775","Type":"ContainerDied","Data":"1c9ebfea36917607572bf31592276546b601a0cd133b9c06868b8893d290788c"} Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.535887 4792 scope.go:117] "RemoveContainer" containerID="eb19d94a0bd842d00dcd04f1e391501497c7d4035c53f569b620e977505a3609" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.536026 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.543853 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-wxs9b"] Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.572548 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.572667 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27d6022e-eea3-41e9-b880-620328dc5d78","Type":"ContainerDied","Data":"07eba9bf2ea9734a419856756e27fc2df2175b506bc259156f6c2a403e3af1c6"} Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:32.573887 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-44cd2" podUID="7e297bad-8615-4fcd-a43a-4ef82af97714" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.598803 4792 scope.go:117] "RemoveContainer" containerID="7b0b51a3568b0257fd59b44b84fcb2226603c94271cc99981af94095c140c28e" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.620820 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dbbf8d9a-2069-4544-92db-ad5174339775" (UID: "dbbf8d9a-2069-4544-92db-ad5174339775"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.648028 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "27d6022e-eea3-41e9-b880-620328dc5d78" (UID: "27d6022e-eea3-41e9-b880-620328dc5d78"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.704110 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbbf8d9a-2069-4544-92db-ad5174339775-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.704467 4792 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27d6022e-eea3-41e9-b880-620328dc5d78-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.711092 4792 scope.go:117] "RemoveContainer" containerID="8659e37a39916a91f8b179786828202562ff0282105e9f9d5ccdfcbc85122bf8" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.752492 4792 scope.go:117] "RemoveContainer" containerID="8fffed7f25cc826dced9350fe59c2b2b5794322a9e9024a84355b647529d07bd" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.872837 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.887187 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.920413 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:32.920984 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" containerName="setup-container" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.921008 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" 
containerName="setup-container" Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:32.921031 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" containerName="setup-container" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.921051 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" containerName="setup-container" Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:32.921067 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" containerName="rabbitmq" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.921075 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" containerName="rabbitmq" Nov 27 17:35:32 crc kubenswrapper[4792]: E1127 17:35:32.921100 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" containerName="rabbitmq" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.921107 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" containerName="rabbitmq" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.921414 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" containerName="rabbitmq" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.921442 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" containerName="rabbitmq" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.923091 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.928106 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.928455 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.928587 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.928619 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.928765 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.931049 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tdpx2" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.931312 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.937769 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.956232 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:35:32 crc kubenswrapper[4792]: I1127 17:35:32.967400 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.026695 4792 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.030374 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.042073 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.042073 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.042720 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-52cs2" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.042872 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.043009 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.043175 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.043364 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.083725 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.123025 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.123576 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qgx\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-kube-api-access-46qgx\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.124149 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73468e89-af69-44aa-bc4d-66c7e34a8dff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.124217 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.125803 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.126255 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.126846 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.126960 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.127386 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.127837 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.127867 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73468e89-af69-44aa-bc4d-66c7e34a8dff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.233958 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d2993a9-7994-4249-bfd1-acc7b734eb16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.234242 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.234320 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 
17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.234390 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.234621 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.234697 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73468e89-af69-44aa-bc4d-66c7e34a8dff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.234753 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.234805 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.234896 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235080 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46qgx\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-kube-api-access-46qgx\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235120 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73468e89-af69-44aa-bc4d-66c7e34a8dff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235214 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235266 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235330 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235402 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkzb\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-kube-api-access-2pkzb\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235453 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235504 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d2993a9-7994-4249-bfd1-acc7b734eb16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235547 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235587 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235672 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.235715 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.236754 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.240083 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.240757 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.240996 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.241201 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.243346 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.243738 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/73468e89-af69-44aa-bc4d-66c7e34a8dff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.247287 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/73468e89-af69-44aa-bc4d-66c7e34a8dff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.249035 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/73468e89-af69-44aa-bc4d-66c7e34a8dff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.250609 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.291055 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qgx\" (UniqueName: \"kubernetes.io/projected/73468e89-af69-44aa-bc4d-66c7e34a8dff-kube-api-access-46qgx\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338171 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338245 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338284 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338314 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338351 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkzb\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-kube-api-access-2pkzb\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338386 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d2993a9-7994-4249-bfd1-acc7b734eb16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338408 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338449 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338470 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d2993a9-7994-4249-bfd1-acc7b734eb16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338494 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.338549 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.339723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.342228 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.343365 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.343664 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.345123 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"73468e89-af69-44aa-bc4d-66c7e34a8dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.345510 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-config-data\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.345770 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6d2993a9-7994-4249-bfd1-acc7b734eb16-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.348849 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6d2993a9-7994-4249-bfd1-acc7b734eb16-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.349237 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.351078 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6d2993a9-7994-4249-bfd1-acc7b734eb16-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.353142 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.364263 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkzb\" (UniqueName: \"kubernetes.io/projected/6d2993a9-7994-4249-bfd1-acc7b734eb16-kube-api-access-2pkzb\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.382115 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9hh7f" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="registry-server" probeResult="failure" output=< Nov 27 17:35:33 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:35:33 crc kubenswrapper[4792]: > Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.390021 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"6d2993a9-7994-4249-bfd1-acc7b734eb16\") " pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.407449 4792 util.go:30] "No sandbox for pod can be found. 
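The startup-probe failure above ("failed to connect service \":50051\" within 1s") is the health check against the registry-server's gRPC port timing out. A TCP-level approximation of that check; the address is an assumption, and the real probe speaks the gRPC health protocol rather than merely dialing:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe models only the connect-within-timeout step of the real check.
func probe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %v: %w",
			addr, timeout, err)
	}
	return conn.Close()
}

func main() {
	// Pod IP assumed for illustration.
	if err := probe("10.217.0.5:50051", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}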
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.587894 4792 generic.go:334] "Generic (PLEG): container finished" podID="5353a78d-2b26-479b-aff1-9e871769dd58" containerID="0126b7798ed5f7ad8967196fe4be3a82462651055067da627081dbb7c7d67510" exitCode=0 Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.588283 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" event={"ID":"5353a78d-2b26-479b-aff1-9e871769dd58","Type":"ContainerDied","Data":"0126b7798ed5f7ad8967196fe4be3a82462651055067da627081dbb7c7d67510"} Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.588343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" event={"ID":"5353a78d-2b26-479b-aff1-9e871769dd58","Type":"ContainerStarted","Data":"9c0919bf753d0d4f6fcbc0bb66da6c88f15a13fb61a6c3b5931de96b4a8418dd"} Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.600865 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:35:33 crc kubenswrapper[4792]: I1127 17:35:33.943949 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 27 17:35:33 crc kubenswrapper[4792]: W1127 17:35:33.956012 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d2993a9_7994_4249_bfd1_acc7b734eb16.slice/crio-bdfa4000b1cf704681b333a901a98e07939bff430124dd26a975062fb1a17f4e WatchSource:0}: Error finding container bdfa4000b1cf704681b333a901a98e07939bff430124dd26a975062fb1a17f4e: Status 404 returned error can't find the container with id bdfa4000b1cf704681b333a901a98e07939bff430124dd26a975062fb1a17f4e Nov 27 17:35:34 crc kubenswrapper[4792]: I1127 17:35:34.202783 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 27 17:35:34 crc kubenswrapper[4792]: W1127 17:35:34.202863 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73468e89_af69_44aa_bc4d_66c7e34a8dff.slice/crio-83dcf135226e01957ba38bf7b67a6eb3a5561f80e48a51b6256d7691abe89de5 WatchSource:0}: Error finding container 83dcf135226e01957ba38bf7b67a6eb3a5561f80e48a51b6256d7691abe89de5: Status 404 returned error can't find the container with id 83dcf135226e01957ba38bf7b67a6eb3a5561f80e48a51b6256d7691abe89de5 Nov 27 17:35:34 crc kubenswrapper[4792]: I1127 17:35:34.605224 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73468e89-af69-44aa-bc4d-66c7e34a8dff","Type":"ContainerStarted","Data":"83dcf135226e01957ba38bf7b67a6eb3a5561f80e48a51b6256d7691abe89de5"} Nov 27 17:35:34 crc kubenswrapper[4792]: I1127 17:35:34.606343 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d2993a9-7994-4249-bfd1-acc7b734eb16","Type":"ContainerStarted","Data":"bdfa4000b1cf704681b333a901a98e07939bff430124dd26a975062fb1a17f4e"} Nov 27 17:35:34 crc kubenswrapper[4792]: I1127 17:35:34.608341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" event={"ID":"5353a78d-2b26-479b-aff1-9e871769dd58","Type":"ContainerStarted","Data":"b8c95a3a721ec3b5850978f8f9543fac94d759e071d36ffb77084bdafb02790d"} Nov 27 17:35:34 crc kubenswrapper[4792]: I1127 17:35:34.608543 4792 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:34 crc kubenswrapper[4792]: I1127 17:35:34.648421 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" podStartSLOduration=5.648399118 podStartE2EDuration="5.648399118s" podCreationTimestamp="2025-11-27 17:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:35:34.633334024 +0000 UTC m=+1556.976160342" watchObservedRunningTime="2025-11-27 17:35:34.648399118 +0000 UTC m=+1556.991225446" Nov 27 17:35:34 crc kubenswrapper[4792]: I1127 17:35:34.703947 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d6022e-eea3-41e9-b880-620328dc5d78" path="/var/lib/kubelet/pods/27d6022e-eea3-41e9-b880-620328dc5d78/volumes" Nov 27 17:35:34 crc kubenswrapper[4792]: I1127 17:35:34.706276 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbbf8d9a-2069-4544-92db-ad5174339775" path="/var/lib/kubelet/pods/dbbf8d9a-2069-4544-92db-ad5174339775/volumes" Nov 27 17:35:36 crc kubenswrapper[4792]: I1127 17:35:36.632792 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73468e89-af69-44aa-bc4d-66c7e34a8dff","Type":"ContainerStarted","Data":"afffe003ab045eadf1d37a5d0ee3b591bef17e3a1d4c2b7b207bab815c4b39fc"} Nov 27 17:35:36 crc kubenswrapper[4792]: I1127 17:35:36.634628 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d2993a9-7994-4249-bfd1-acc7b734eb16","Type":"ContainerStarted","Data":"2e1ea603b0791010fb36b4c0919f57d593550478de99a220797b0955e12169b9"} Nov 27 17:35:37 crc kubenswrapper[4792]: I1127 17:35:37.460445 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.4:3000/\": dial tcp 10.217.1.4:3000: connect: connection refused" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.022901 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.097696 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-227hq"] Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.097989 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" podUID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" containerName="dnsmasq-dns" containerID="cri-o://8ec4922a35967ec6f574cb50e8fcbc84edc9a606eba95564c2cd9325610636aa" gracePeriod=10 Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.288673 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-76jbs"] Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.291456 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.302415 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-76jbs"] Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.427638 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.428199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-config\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.428244 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.428284 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.428373 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.428702 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6prt\" (UniqueName: \"kubernetes.io/projected/ec33e14b-5586-4b5e-a807-396841a63250-kube-api-access-d6prt\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.428826 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.531184 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6prt\" (UniqueName: \"kubernetes.io/projected/ec33e14b-5586-4b5e-a807-396841a63250-kube-api-access-d6prt\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 
17:35:40.531244 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.531315 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.531353 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-config\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.531382 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.531410 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.532294 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.532331 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-config\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.532348 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.532414 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.532415 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.532666 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.532839 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec33e14b-5586-4b5e-a807-396841a63250-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.565750 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6prt\" (UniqueName: \"kubernetes.io/projected/ec33e14b-5586-4b5e-a807-396841a63250-kube-api-access-d6prt\") pod \"dnsmasq-dns-5596c69fcc-76jbs\" (UID: \"ec33e14b-5586-4b5e-a807-396841a63250\") " pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.689042 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.709982 4792 generic.go:334] "Generic (PLEG): container finished" podID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" containerID="8ec4922a35967ec6f574cb50e8fcbc84edc9a606eba95564c2cd9325610636aa" exitCode=0 Nov 27 17:35:40 crc kubenswrapper[4792]: I1127 17:35:40.717985 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" event={"ID":"c7d684b6-6b54-4fea-86da-6b6266a2c2eb","Type":"ContainerDied","Data":"8ec4922a35967ec6f574cb50e8fcbc84edc9a606eba95564c2cd9325610636aa"} Nov 27 17:35:41 crc kubenswrapper[4792]: W1127 17:35:41.299095 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec33e14b_5586_4b5e_a807_396841a63250.slice/crio-60b86b47e4b6084187a2c67999e11d3f17889c828254f45cb02afa06875438ab WatchSource:0}: Error finding container 60b86b47e4b6084187a2c67999e11d3f17889c828254f45cb02afa06875438ab: Status 404 returned error can't find the container with id 60b86b47e4b6084187a2c67999e11d3f17889c828254f45cb02afa06875438ab Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.331468 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-76jbs"] Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.550482 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.666746 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-sb\") pod \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.667275 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-svc\") pod \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.667328 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-nb\") pod \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.667359 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlxct\" (UniqueName: \"kubernetes.io/projected/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-kube-api-access-wlxct\") pod \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.667410 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-swift-storage-0\") pod \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.667458 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-config\") pod \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\" (UID: \"c7d684b6-6b54-4fea-86da-6b6266a2c2eb\") " Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.671761 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-kube-api-access-wlxct" (OuterVolumeSpecName: "kube-api-access-wlxct") pod "c7d684b6-6b54-4fea-86da-6b6266a2c2eb" (UID: "c7d684b6-6b54-4fea-86da-6b6266a2c2eb"). InnerVolumeSpecName "kube-api-access-wlxct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.748357 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" event={"ID":"c7d684b6-6b54-4fea-86da-6b6266a2c2eb","Type":"ContainerDied","Data":"1af6b3ac301a94864a01ea1fbf33d0203c939bf2ccf467b1a1ffc0be49075700"} Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.748451 4792 scope.go:117] "RemoveContainer" containerID="8ec4922a35967ec6f574cb50e8fcbc84edc9a606eba95564c2cd9325610636aa" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.748682 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-227hq" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.752068 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" event={"ID":"ec33e14b-5586-4b5e-a807-396841a63250","Type":"ContainerStarted","Data":"60b86b47e4b6084187a2c67999e11d3f17889c828254f45cb02afa06875438ab"} Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.770954 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlxct\" (UniqueName: \"kubernetes.io/projected/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-kube-api-access-wlxct\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.786057 4792 scope.go:117] "RemoveContainer" containerID="6d85a66371eb30fa40a2c02ef5a53bfa0fab3954daef3d3c6af22570ddce5af1" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.808269 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7d684b6-6b54-4fea-86da-6b6266a2c2eb" (UID: "c7d684b6-6b54-4fea-86da-6b6266a2c2eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.816410 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7d684b6-6b54-4fea-86da-6b6266a2c2eb" (UID: "c7d684b6-6b54-4fea-86da-6b6266a2c2eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.820255 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-config" (OuterVolumeSpecName: "config") pod "c7d684b6-6b54-4fea-86da-6b6266a2c2eb" (UID: "c7d684b6-6b54-4fea-86da-6b6266a2c2eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.823297 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7d684b6-6b54-4fea-86da-6b6266a2c2eb" (UID: "c7d684b6-6b54-4fea-86da-6b6266a2c2eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.848805 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c7d684b6-6b54-4fea-86da-6b6266a2c2eb" (UID: "c7d684b6-6b54-4fea-86da-6b6266a2c2eb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.873746 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.873774 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.873784 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.873794 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:41 crc kubenswrapper[4792]: I1127 17:35:41.873806 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7d684b6-6b54-4fea-86da-6b6266a2c2eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:42 crc kubenswrapper[4792]: I1127 17:35:42.087550 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-227hq"] Nov 27 17:35:42 crc kubenswrapper[4792]: I1127 17:35:42.098786 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-227hq"] Nov 27 17:35:42 crc kubenswrapper[4792]: I1127 17:35:42.368211 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:35:42 crc kubenswrapper[4792]: I1127 17:35:42.427416 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:35:42 crc kubenswrapper[4792]: I1127 17:35:42.687715 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:35:42 crc kubenswrapper[4792]: E1127 17:35:42.687965 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:35:42 crc kubenswrapper[4792]: I1127 17:35:42.704839 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" path="/var/lib/kubelet/pods/c7d684b6-6b54-4fea-86da-6b6266a2c2eb/volumes" Nov 27 17:35:42 crc kubenswrapper[4792]: I1127 17:35:42.763956 4792 generic.go:334] "Generic (PLEG): container finished" podID="ec33e14b-5586-4b5e-a807-396841a63250" containerID="feacef39113111bcbca082c337e8795c2da0e04ae43230f477756e6877d2516e" exitCode=0 Nov 27 17:35:42 crc kubenswrapper[4792]: I1127 17:35:42.763998 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" 
event={"ID":"ec33e14b-5586-4b5e-a807-396841a63250","Type":"ContainerDied","Data":"feacef39113111bcbca082c337e8795c2da0e04ae43230f477756e6877d2516e"} Nov 27 17:35:43 crc kubenswrapper[4792]: I1127 17:35:43.225967 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9hh7f"] Nov 27 17:35:43 crc kubenswrapper[4792]: I1127 17:35:43.780187 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9hh7f" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="registry-server" containerID="cri-o://2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e" gracePeriod=2 Nov 27 17:35:43 crc kubenswrapper[4792]: I1127 17:35:43.780181 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" event={"ID":"ec33e14b-5586-4b5e-a807-396841a63250","Type":"ContainerStarted","Data":"61c467f157917b2725352d96f856d296e02f84625761a8b0c2e58bc8b97aedea"} Nov 27 17:35:43 crc kubenswrapper[4792]: I1127 17:35:43.809403 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" podStartSLOduration=3.809378514 podStartE2EDuration="3.809378514s" podCreationTimestamp="2025-11-27 17:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:35:43.805029736 +0000 UTC m=+1566.147856064" watchObservedRunningTime="2025-11-27 17:35:43.809378514 +0000 UTC m=+1566.152204832" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.406704 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.553086 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-catalog-content\") pod \"7180a433-86ea-495c-9fc8-22583bbabe14\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.553581 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frm2f\" (UniqueName: \"kubernetes.io/projected/7180a433-86ea-495c-9fc8-22583bbabe14-kube-api-access-frm2f\") pod \"7180a433-86ea-495c-9fc8-22583bbabe14\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.553771 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-utilities\") pod \"7180a433-86ea-495c-9fc8-22583bbabe14\" (UID: \"7180a433-86ea-495c-9fc8-22583bbabe14\") " Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.554120 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-utilities" (OuterVolumeSpecName: "utilities") pod "7180a433-86ea-495c-9fc8-22583bbabe14" (UID: "7180a433-86ea-495c-9fc8-22583bbabe14"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.554809 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.560817 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7180a433-86ea-495c-9fc8-22583bbabe14-kube-api-access-frm2f" (OuterVolumeSpecName: "kube-api-access-frm2f") pod "7180a433-86ea-495c-9fc8-22583bbabe14" (UID: "7180a433-86ea-495c-9fc8-22583bbabe14"). InnerVolumeSpecName "kube-api-access-frm2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.661242 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frm2f\" (UniqueName: \"kubernetes.io/projected/7180a433-86ea-495c-9fc8-22583bbabe14-kube-api-access-frm2f\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.717758 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7180a433-86ea-495c-9fc8-22583bbabe14" (UID: "7180a433-86ea-495c-9fc8-22583bbabe14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.763772 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7180a433-86ea-495c-9fc8-22583bbabe14-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.796076 4792 generic.go:334] "Generic (PLEG): container finished" podID="7180a433-86ea-495c-9fc8-22583bbabe14" containerID="2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e" exitCode=0 Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.796157 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9hh7f" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.796209 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hh7f" event={"ID":"7180a433-86ea-495c-9fc8-22583bbabe14","Type":"ContainerDied","Data":"2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e"} Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.796273 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hh7f" event={"ID":"7180a433-86ea-495c-9fc8-22583bbabe14","Type":"ContainerDied","Data":"3f7dd1d0fd9c1383a468906e708330e7187ddfb3974f9783128ae2197ce4d0e1"} Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.796327 4792 scope.go:117] "RemoveContainer" containerID="2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.797213 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.819978 4792 scope.go:117] "RemoveContainer" containerID="5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.854833 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9hh7f"] Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.854968 4792 scope.go:117] "RemoveContainer" containerID="30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.865491 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9hh7f"] Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.921902 4792 scope.go:117] "RemoveContainer" containerID="2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e" Nov 27 17:35:44 crc kubenswrapper[4792]: E1127 17:35:44.922330 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e\": container with ID starting with 2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e not found: ID does not exist" containerID="2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.922385 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e"} err="failed to get container status \"2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e\": rpc error: code = NotFound desc = could not find container \"2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e\": container with ID starting with 2bb60246e978fad2e7cec5089a4422168fd90114e251018d4a6117100fed7b5e not found: ID does not exist" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.922425 4792 scope.go:117] "RemoveContainer" containerID="5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6" Nov 27 17:35:44 crc kubenswrapper[4792]: E1127 17:35:44.922905 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6\": container with ID starting with 5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6 not found: ID 
does not exist" containerID="5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.922939 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6"} err="failed to get container status \"5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6\": rpc error: code = NotFound desc = could not find container \"5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6\": container with ID starting with 5c6ef065b9a83ca6048d65bb37a404e35e55d38760af42d3202fd7cb41ca0cd6 not found: ID does not exist" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.922959 4792 scope.go:117] "RemoveContainer" containerID="30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135" Nov 27 17:35:44 crc kubenswrapper[4792]: E1127 17:35:44.923340 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135\": container with ID starting with 30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135 not found: ID does not exist" containerID="30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135" Nov 27 17:35:44 crc kubenswrapper[4792]: I1127 17:35:44.923366 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135"} err="failed to get container status \"30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135\": rpc error: code = NotFound desc = could not find container \"30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135\": container with ID starting with 30edc5599d69792c9d1f9bc021b1d4b7004e996fca31862f3dedd97e4c0a1135 not found: ID does not exist" Nov 27 17:35:46 crc kubenswrapper[4792]: I1127 17:35:46.701368 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" path="/var/lib/kubelet/pods/7180a433-86ea-495c-9fc8-22583bbabe14/volumes" Nov 27 17:35:46 crc kubenswrapper[4792]: I1127 17:35:46.823538 4792 generic.go:334] "Generic (PLEG): container finished" podID="9278b907-83d9-463e-9fd9-41d227ad834d" containerID="0dcb4012334df0ca18bc06697c59f4734d9db0d720ae1e5ce634999c0950a952" exitCode=137 Nov 27 17:35:46 crc kubenswrapper[4792]: I1127 17:35:46.823602 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerDied","Data":"0dcb4012334df0ca18bc06697c59f4734d9db0d720ae1e5ce634999c0950a952"} Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.359945 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.531209 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-combined-ca-bundle\") pod \"9278b907-83d9-463e-9fd9-41d227ad834d\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.531504 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55pqh\" (UniqueName: \"kubernetes.io/projected/9278b907-83d9-463e-9fd9-41d227ad834d-kube-api-access-55pqh\") pod \"9278b907-83d9-463e-9fd9-41d227ad834d\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.531665 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-config-data\") pod \"9278b907-83d9-463e-9fd9-41d227ad834d\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.531791 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-run-httpd\") pod \"9278b907-83d9-463e-9fd9-41d227ad834d\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.531828 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-sg-core-conf-yaml\") pod \"9278b907-83d9-463e-9fd9-41d227ad834d\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.531852 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-log-httpd\") pod \"9278b907-83d9-463e-9fd9-41d227ad834d\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.531885 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-ceilometer-tls-certs\") pod \"9278b907-83d9-463e-9fd9-41d227ad834d\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.532240 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9278b907-83d9-463e-9fd9-41d227ad834d" (UID: "9278b907-83d9-463e-9fd9-41d227ad834d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.532326 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9278b907-83d9-463e-9fd9-41d227ad834d" (UID: "9278b907-83d9-463e-9fd9-41d227ad834d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.532435 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-scripts\") pod \"9278b907-83d9-463e-9fd9-41d227ad834d\" (UID: \"9278b907-83d9-463e-9fd9-41d227ad834d\") " Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.534052 4792 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.534075 4792 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9278b907-83d9-463e-9fd9-41d227ad834d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.536810 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-scripts" (OuterVolumeSpecName: "scripts") pod "9278b907-83d9-463e-9fd9-41d227ad834d" (UID: "9278b907-83d9-463e-9fd9-41d227ad834d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.536981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9278b907-83d9-463e-9fd9-41d227ad834d-kube-api-access-55pqh" (OuterVolumeSpecName: "kube-api-access-55pqh") pod "9278b907-83d9-463e-9fd9-41d227ad834d" (UID: "9278b907-83d9-463e-9fd9-41d227ad834d"). InnerVolumeSpecName "kube-api-access-55pqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.563624 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9278b907-83d9-463e-9fd9-41d227ad834d" (UID: "9278b907-83d9-463e-9fd9-41d227ad834d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.588539 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9278b907-83d9-463e-9fd9-41d227ad834d" (UID: "9278b907-83d9-463e-9fd9-41d227ad834d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.638353 4792 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.638395 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.638406 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.638415 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55pqh\" (UniqueName: \"kubernetes.io/projected/9278b907-83d9-463e-9fd9-41d227ad834d-kube-api-access-55pqh\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.654840 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9278b907-83d9-463e-9fd9-41d227ad834d" (UID: "9278b907-83d9-463e-9fd9-41d227ad834d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.681128 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-config-data" (OuterVolumeSpecName: "config-data") pod "9278b907-83d9-463e-9fd9-41d227ad834d" (UID: "9278b907-83d9-463e-9fd9-41d227ad834d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.741007 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.741061 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9278b907-83d9-463e-9fd9-41d227ad834d-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.835721 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9278b907-83d9-463e-9fd9-41d227ad834d","Type":"ContainerDied","Data":"5438cc281b25fc15d584599f2180f7e13bfeea647f9167d7f10259d41dcc0737"} Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.835770 4792 scope.go:117] "RemoveContainer" containerID="11e80191bc85ac12b5d8a629217576a9b8ff13833c39101587dd9494d1684a39" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.835804 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.872656 4792 scope.go:117] "RemoveContainer" containerID="1ed95b2c1879420ab9a22969e9deed1b7db8f0ea918f2e4eeb9838cbd48318cd" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.872833 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.883831 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.911674 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912354 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" containerName="init" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912370 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" containerName="init" Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912377 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="registry-server" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912383 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="registry-server" Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912399 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="extract-utilities" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912405 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="extract-utilities" Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912425 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" containerName="dnsmasq-dns" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912431 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" containerName="dnsmasq-dns" Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912449 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="ceilometer-central-agent" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912455 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="ceilometer-central-agent" Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912467 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="proxy-httpd" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912474 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="proxy-httpd" Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912498 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="sg-core" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912504 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="sg-core" Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912516 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="ceilometer-notification-agent" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912523 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="ceilometer-notification-agent" Nov 27 17:35:47 crc kubenswrapper[4792]: E1127 17:35:47.912542 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="extract-content" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912548 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="extract-content" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912755 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="sg-core" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912767 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="ceilometer-notification-agent" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912784 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="proxy-httpd" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912792 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d684b6-6b54-4fea-86da-6b6266a2c2eb" containerName="dnsmasq-dns" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912805 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" containerName="ceilometer-central-agent" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.912820 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7180a433-86ea-495c-9fc8-22583bbabe14" containerName="registry-server" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.914781 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.920019 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.920029 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.920200 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.922460 4792 scope.go:117] "RemoveContainer" containerID="c8f35560cdd6781ef0b02adf95881e7308bf044b6b23ad7c752b803d2fcc242e" Nov 27 17:35:47 crc kubenswrapper[4792]: I1127 17:35:47.931990 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.016275 4792 scope.go:117] "RemoveContainer" containerID="0dcb4012334df0ca18bc06697c59f4734d9db0d720ae1e5ce634999c0950a952" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.052451 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.052592 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.052697 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.052723 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-config-data\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.052764 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-scripts\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.052810 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4f18974-fb5b-4bb2-906b-9f17d1297b04-run-httpd\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.052828 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e4f18974-fb5b-4bb2-906b-9f17d1297b04-log-httpd\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.052851 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2plb\" (UniqueName: \"kubernetes.io/projected/e4f18974-fb5b-4bb2-906b-9f17d1297b04-kube-api-access-j2plb\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.161981 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.162086 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.162152 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.162177 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-config-data\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.162213 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-scripts\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.162255 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4f18974-fb5b-4bb2-906b-9f17d1297b04-run-httpd\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.162271 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4f18974-fb5b-4bb2-906b-9f17d1297b04-log-httpd\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.162295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2plb\" (UniqueName: \"kubernetes.io/projected/e4f18974-fb5b-4bb2-906b-9f17d1297b04-kube-api-access-j2plb\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.172949 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4f18974-fb5b-4bb2-906b-9f17d1297b04-log-httpd\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.174230 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4f18974-fb5b-4bb2-906b-9f17d1297b04-run-httpd\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.186624 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.188118 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.188351 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-scripts\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.188985 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.191940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2plb\" (UniqueName: \"kubernetes.io/projected/e4f18974-fb5b-4bb2-906b-9f17d1297b04-kube-api-access-j2plb\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.192061 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4f18974-fb5b-4bb2-906b-9f17d1297b04-config-data\") pod \"ceilometer-0\" (UID: \"e4f18974-fb5b-4bb2-906b-9f17d1297b04\") " pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.280971 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.704445 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9278b907-83d9-463e-9fd9-41d227ad834d" path="/var/lib/kubelet/pods/9278b907-83d9-463e-9fd9-41d227ad834d/volumes" Nov 27 17:35:48 crc kubenswrapper[4792]: W1127 17:35:48.814354 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4f18974_fb5b_4bb2_906b_9f17d1297b04.slice/crio-2790329aef2035245e2b7c7d573af8a74500643cf8e89e6650d98dccf51899a8 WatchSource:0}: Error finding container 2790329aef2035245e2b7c7d573af8a74500643cf8e89e6650d98dccf51899a8: Status 404 returned error can't find the container with id 2790329aef2035245e2b7c7d573af8a74500643cf8e89e6650d98dccf51899a8 Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.822050 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.858019 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-44cd2" event={"ID":"7e297bad-8615-4fcd-a43a-4ef82af97714","Type":"ContainerStarted","Data":"8c1e59b9826ef4d622534d5d6a0ce641b6405fceb3b61bfc99920b1db5bff0a3"} Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.860718 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4f18974-fb5b-4bb2-906b-9f17d1297b04","Type":"ContainerStarted","Data":"2790329aef2035245e2b7c7d573af8a74500643cf8e89e6650d98dccf51899a8"} Nov 27 17:35:48 crc kubenswrapper[4792]: I1127 17:35:48.875455 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-44cd2" podStartSLOduration=1.838130241 podStartE2EDuration="37.875434933s" podCreationTimestamp="2025-11-27 17:35:11 +0000 UTC" firstStartedPulling="2025-11-27 17:35:11.988040205 +0000 UTC m=+1534.330866523" lastFinishedPulling="2025-11-27 17:35:48.025344907 +0000 UTC m=+1570.368171215" observedRunningTime="2025-11-27 17:35:48.874031069 +0000 UTC m=+1571.216857397" watchObservedRunningTime="2025-11-27 17:35:48.875434933 +0000 UTC m=+1571.218261251" Nov 27 17:35:50 crc kubenswrapper[4792]: I1127 17:35:50.705499 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-76jbs" Nov 27 17:35:50 crc kubenswrapper[4792]: I1127 17:35:50.820378 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-wxs9b"] Nov 27 17:35:50 crc kubenswrapper[4792]: I1127 17:35:50.820669 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" podUID="5353a78d-2b26-479b-aff1-9e871769dd58" containerName="dnsmasq-dns" containerID="cri-o://b8c95a3a721ec3b5850978f8f9543fac94d759e071d36ffb77084bdafb02790d" gracePeriod=10 Nov 27 17:35:50 crc kubenswrapper[4792]: I1127 17:35:50.886550 4792 generic.go:334] "Generic (PLEG): container finished" podID="7e297bad-8615-4fcd-a43a-4ef82af97714" containerID="8c1e59b9826ef4d622534d5d6a0ce641b6405fceb3b61bfc99920b1db5bff0a3" exitCode=0 Nov 27 17:35:50 crc kubenswrapper[4792]: I1127 17:35:50.886593 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-44cd2" event={"ID":"7e297bad-8615-4fcd-a43a-4ef82af97714","Type":"ContainerDied","Data":"8c1e59b9826ef4d622534d5d6a0ce641b6405fceb3b61bfc99920b1db5bff0a3"} Nov 27 17:35:51 crc kubenswrapper[4792]: I1127 17:35:51.901701 4792 
generic.go:334] "Generic (PLEG): container finished" podID="5353a78d-2b26-479b-aff1-9e871769dd58" containerID="b8c95a3a721ec3b5850978f8f9543fac94d759e071d36ffb77084bdafb02790d" exitCode=0 Nov 27 17:35:51 crc kubenswrapper[4792]: I1127 17:35:51.901750 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" event={"ID":"5353a78d-2b26-479b-aff1-9e871769dd58","Type":"ContainerDied","Data":"b8c95a3a721ec3b5850978f8f9543fac94d759e071d36ffb77084bdafb02790d"} Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.284642 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.400712 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-combined-ca-bundle\") pod \"7e297bad-8615-4fcd-a43a-4ef82af97714\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.400760 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh99l\" (UniqueName: \"kubernetes.io/projected/7e297bad-8615-4fcd-a43a-4ef82af97714-kube-api-access-xh99l\") pod \"7e297bad-8615-4fcd-a43a-4ef82af97714\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.400880 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-config-data\") pod \"7e297bad-8615-4fcd-a43a-4ef82af97714\" (UID: \"7e297bad-8615-4fcd-a43a-4ef82af97714\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.406425 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e297bad-8615-4fcd-a43a-4ef82af97714-kube-api-access-xh99l" (OuterVolumeSpecName: "kube-api-access-xh99l") pod "7e297bad-8615-4fcd-a43a-4ef82af97714" (UID: "7e297bad-8615-4fcd-a43a-4ef82af97714"). InnerVolumeSpecName "kube-api-access-xh99l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.425916 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.471864 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e297bad-8615-4fcd-a43a-4ef82af97714" (UID: "7e297bad-8615-4fcd-a43a-4ef82af97714"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.502630 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-config\") pod \"5353a78d-2b26-479b-aff1-9e871769dd58\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.502752 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-swift-storage-0\") pod \"5353a78d-2b26-479b-aff1-9e871769dd58\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.502773 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzg4d\" (UniqueName: \"kubernetes.io/projected/5353a78d-2b26-479b-aff1-9e871769dd58-kube-api-access-bzg4d\") pod \"5353a78d-2b26-479b-aff1-9e871769dd58\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.502873 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-sb\") pod \"5353a78d-2b26-479b-aff1-9e871769dd58\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.503155 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-nb\") pod \"5353a78d-2b26-479b-aff1-9e871769dd58\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.503191 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-openstack-edpm-ipam\") pod \"5353a78d-2b26-479b-aff1-9e871769dd58\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.503222 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-svc\") pod \"5353a78d-2b26-479b-aff1-9e871769dd58\" (UID: \"5353a78d-2b26-479b-aff1-9e871769dd58\") " Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.503794 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.503815 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh99l\" (UniqueName: \"kubernetes.io/projected/7e297bad-8615-4fcd-a43a-4ef82af97714-kube-api-access-xh99l\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.522966 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5353a78d-2b26-479b-aff1-9e871769dd58-kube-api-access-bzg4d" (OuterVolumeSpecName: "kube-api-access-bzg4d") pod "5353a78d-2b26-479b-aff1-9e871769dd58" (UID: "5353a78d-2b26-479b-aff1-9e871769dd58"). InnerVolumeSpecName "kube-api-access-bzg4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.566382 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-config-data" (OuterVolumeSpecName: "config-data") pod "7e297bad-8615-4fcd-a43a-4ef82af97714" (UID: "7e297bad-8615-4fcd-a43a-4ef82af97714"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.573570 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5353a78d-2b26-479b-aff1-9e871769dd58" (UID: "5353a78d-2b26-479b-aff1-9e871769dd58"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.581121 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5353a78d-2b26-479b-aff1-9e871769dd58" (UID: "5353a78d-2b26-479b-aff1-9e871769dd58"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.600405 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-config" (OuterVolumeSpecName: "config") pod "5353a78d-2b26-479b-aff1-9e871769dd58" (UID: "5353a78d-2b26-479b-aff1-9e871769dd58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.609761 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5353a78d-2b26-479b-aff1-9e871769dd58" (UID: "5353a78d-2b26-479b-aff1-9e871769dd58"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.610233 4792 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-config\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.610288 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzg4d\" (UniqueName: \"kubernetes.io/projected/5353a78d-2b26-479b-aff1-9e871769dd58-kube-api-access-bzg4d\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.610301 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.610315 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e297bad-8615-4fcd-a43a-4ef82af97714-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.610326 4792 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.610338 4792 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.616245 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5353a78d-2b26-479b-aff1-9e871769dd58" (UID: "5353a78d-2b26-479b-aff1-9e871769dd58"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.625913 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5353a78d-2b26-479b-aff1-9e871769dd58" (UID: "5353a78d-2b26-479b-aff1-9e871769dd58"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.712865 4792 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.712903 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5353a78d-2b26-479b-aff1-9e871769dd58-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.936765 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-44cd2" event={"ID":"7e297bad-8615-4fcd-a43a-4ef82af97714","Type":"ContainerDied","Data":"1c5ae616983a171d9a11a5c99f82d59a84ebdc801ca789eb42bb1f9e81dc7530"} Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.936820 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5ae616983a171d9a11a5c99f82d59a84ebdc801ca789eb42bb1f9e81dc7530" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.936825 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-44cd2" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.938145 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4f18974-fb5b-4bb2-906b-9f17d1297b04","Type":"ContainerStarted","Data":"e745e5a4fbb44731d60c13fc77761e66a7cb90ae122743b1383e59b261e1cd1f"} Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.940541 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" event={"ID":"5353a78d-2b26-479b-aff1-9e871769dd58","Type":"ContainerDied","Data":"9c0919bf753d0d4f6fcbc0bb66da6c88f15a13fb61a6c3b5931de96b4a8418dd"} Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.940590 4792 scope.go:117] "RemoveContainer" containerID="b8c95a3a721ec3b5850978f8f9543fac94d759e071d36ffb77084bdafb02790d" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.940659 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-wxs9b" Nov 27 17:35:53 crc kubenswrapper[4792]: I1127 17:35:53.973984 4792 scope.go:117] "RemoveContainer" containerID="0126b7798ed5f7ad8967196fe4be3a82462651055067da627081dbb7c7d67510" Nov 27 17:35:54 crc kubenswrapper[4792]: I1127 17:35:54.005433 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-wxs9b"] Nov 27 17:35:54 crc kubenswrapper[4792]: I1127 17:35:54.023373 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-wxs9b"] Nov 27 17:35:54 crc kubenswrapper[4792]: I1127 17:35:54.708153 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5353a78d-2b26-479b-aff1-9e871769dd58" path="/var/lib/kubelet/pods/5353a78d-2b26-479b-aff1-9e871769dd58/volumes" Nov 27 17:35:54 crc kubenswrapper[4792]: I1127 17:35:54.957183 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4f18974-fb5b-4bb2-906b-9f17d1297b04","Type":"ContainerStarted","Data":"d7425d8cdca71b71be3091ffac793ad78f192ebfd3621d896fc715fb10ad2370"} Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.686822 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:35:55 crc kubenswrapper[4792]: E1127 17:35:55.687536 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.738782 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-695947b5db-q2kr8"] Nov 27 17:35:55 crc kubenswrapper[4792]: E1127 17:35:55.739361 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e297bad-8615-4fcd-a43a-4ef82af97714" containerName="heat-db-sync" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.739382 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e297bad-8615-4fcd-a43a-4ef82af97714" containerName="heat-db-sync" Nov 27 17:35:55 crc kubenswrapper[4792]: E1127 17:35:55.739403 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5353a78d-2b26-479b-aff1-9e871769dd58" containerName="init" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.739411 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5353a78d-2b26-479b-aff1-9e871769dd58" containerName="init" Nov 27 17:35:55 crc kubenswrapper[4792]: E1127 17:35:55.739469 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5353a78d-2b26-479b-aff1-9e871769dd58" containerName="dnsmasq-dns" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.739477 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5353a78d-2b26-479b-aff1-9e871769dd58" containerName="dnsmasq-dns" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.739772 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e297bad-8615-4fcd-a43a-4ef82af97714" containerName="heat-db-sync" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.739801 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5353a78d-2b26-479b-aff1-9e871769dd58" containerName="dnsmasq-dns" Nov 27 17:35:55 crc kubenswrapper[4792]: 
I1127 17:35:55.740827 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.754968 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-695947b5db-q2kr8"]
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.771098 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-config-data\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.771232 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-combined-ca-bundle\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.771277 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-config-data-custom\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.771357 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txsmq\" (UniqueName: \"kubernetes.io/projected/9603abd5-f9a5-4ace-9d0f-652992d6de1e-kube-api-access-txsmq\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.802916 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-f54bc59f4-fb7f4"]
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.804671 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.814834 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f54bc59f4-fb7f4"]
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.869344 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6cc9cdbfc-zr59q"]
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.871276 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-config-data-custom\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873317 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txsmq\" (UniqueName: \"kubernetes.io/projected/9603abd5-f9a5-4ace-9d0f-652992d6de1e-kube-api-access-txsmq\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873345 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-public-tls-certs\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-config-data-custom\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873417 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbbq\" (UniqueName: \"kubernetes.io/projected/e7c59788-726a-4159-91a6-766cad09ff7d-kube-api-access-4tbbq\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873442 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-config-data\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873508 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-internal-tls-certs\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873562 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-config-data\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873595 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-combined-ca-bundle\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.873668 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-combined-ca-bundle\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.881240 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-combined-ca-bundle\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.885797 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6cc9cdbfc-zr59q"]
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.888348 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-config-data-custom\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.889984 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txsmq\" (UniqueName: \"kubernetes.io/projected/9603abd5-f9a5-4ace-9d0f-652992d6de1e-kube-api-access-txsmq\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.891599 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9603abd5-f9a5-4ace-9d0f-652992d6de1e-config-data\") pod \"heat-engine-695947b5db-q2kr8\" (UID: \"9603abd5-f9a5-4ace-9d0f-652992d6de1e\") " pod="openstack/heat-engine-695947b5db-q2kr8"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975382 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-public-tls-certs\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975440 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-config-data-custom\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975490 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbbq\" (UniqueName: \"kubernetes.io/projected/e7c59788-726a-4159-91a6-766cad09ff7d-kube-api-access-4tbbq\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4"
\"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-config-data\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975584 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-internal-tls-certs\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975629 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-config-data-custom\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975679 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-combined-ca-bundle\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975728 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-combined-ca-bundle\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975769 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-config-data\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975841 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2h65\" (UniqueName: \"kubernetes.io/projected/02e83c23-359e-428f-acab-41d6912a84ab-kube-api-access-k2h65\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-internal-tls-certs\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.975919 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-public-tls-certs\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.978844 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"e4f18974-fb5b-4bb2-906b-9f17d1297b04","Type":"ContainerStarted","Data":"c23cababc34464801df73b1a9f120c9a71c8eeb9a0d2f3ada8e15a93955c3528"} Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.979008 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-public-tls-certs\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.979815 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-config-data-custom\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.980331 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-combined-ca-bundle\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.980621 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-internal-tls-certs\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.980752 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c59788-726a-4159-91a6-766cad09ff7d-config-data\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:55 crc kubenswrapper[4792]: I1127 17:35:55.995157 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbbq\" (UniqueName: \"kubernetes.io/projected/e7c59788-726a-4159-91a6-766cad09ff7d-kube-api-access-4tbbq\") pod \"heat-api-f54bc59f4-fb7f4\" (UID: \"e7c59788-726a-4159-91a6-766cad09ff7d\") " pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.065503 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-695947b5db-q2kr8" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.079672 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-config-data-custom\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.079784 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-combined-ca-bundle\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.079887 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-config-data\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.079993 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2h65\" (UniqueName: \"kubernetes.io/projected/02e83c23-359e-428f-acab-41d6912a84ab-kube-api-access-k2h65\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.080077 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-internal-tls-certs\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.080312 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-public-tls-certs\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.088789 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-public-tls-certs\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.089963 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-config-data\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.089992 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-internal-tls-certs\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 
17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.091206 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-config-data-custom\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.099812 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02e83c23-359e-428f-acab-41d6912a84ab-combined-ca-bundle\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.100486 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2h65\" (UniqueName: \"kubernetes.io/projected/02e83c23-359e-428f-acab-41d6912a84ab-kube-api-access-k2h65\") pod \"heat-cfnapi-6cc9cdbfc-zr59q\" (UID: \"02e83c23-359e-428f-acab-41d6912a84ab\") " pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.131606 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.361465 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.579795 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-695947b5db-q2kr8"] Nov 27 17:35:56 crc kubenswrapper[4792]: W1127 17:35:56.581811 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9603abd5_f9a5_4ace_9d0f_652992d6de1e.slice/crio-0920987ee643a4b8fa8a41ab6cfdb19cf396e8da4a95b18bdd82a9663ab15774 WatchSource:0}: Error finding container 0920987ee643a4b8fa8a41ab6cfdb19cf396e8da4a95b18bdd82a9663ab15774: Status 404 returned error can't find the container with id 0920987ee643a4b8fa8a41ab6cfdb19cf396e8da4a95b18bdd82a9663ab15774 Nov 27 17:35:56 crc kubenswrapper[4792]: W1127 17:35:56.757810 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c59788_726a_4159_91a6_766cad09ff7d.slice/crio-4ebe197e839262878cd37288bf5776b147740b4b61fb0356b4503527c5d7b856 WatchSource:0}: Error finding container 4ebe197e839262878cd37288bf5776b147740b4b61fb0356b4503527c5d7b856: Status 404 returned error can't find the container with id 4ebe197e839262878cd37288bf5776b147740b4b61fb0356b4503527c5d7b856 Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.765822 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-f54bc59f4-fb7f4"] Nov 27 17:35:56 crc kubenswrapper[4792]: W1127 17:35:56.948269 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02e83c23_359e_428f_acab_41d6912a84ab.slice/crio-cf36f33943ed10ef378bc2cbd4d2699bb72c25c1469d9890cfac5c57becb84eb WatchSource:0}: Error finding container cf36f33943ed10ef378bc2cbd4d2699bb72c25c1469d9890cfac5c57becb84eb: Status 404 returned error can't find the container with id cf36f33943ed10ef378bc2cbd4d2699bb72c25c1469d9890cfac5c57becb84eb Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.950706 4792 kubelet.go:2428] 
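Note: the three W1127 manager.go:1169 warnings above are cAdvisor racing CRI-O: the cgroup watch fires before the runtime has registered the new crio-… sandbox, so the lookup returns a 404. All three IDs (0920987e…, 4ebe197e…, cf36f339…) reappear in ContainerStarted events within the next second, so the warnings are transient here. A quick cross-check that every 404'd ID eventually starts (file name and one-record-per-line layout are assumptions):

    import re

    WATCH_404 = re.compile(r"Error finding container (?P<cid>[0-9a-f]{64})")
    STARTED = re.compile(r'"Type":"ContainerStarted","Data":"(?P<cid>[0-9a-f]{64})"')

    missing, started = set(), set()
    with open("kubelet.log") as fh:
        for line in fh:
            if m := WATCH_404.search(line):
                missing.add(m["cid"])
            if m := STARTED.search(line):
                started.add(m["cid"])

    # Only IDs that 404'd and never started are worth investigating.
    print("never started:", sorted(cid[:12] for cid in missing - started))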
"SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6cc9cdbfc-zr59q"] Nov 27 17:35:56 crc kubenswrapper[4792]: I1127 17:35:56.991086 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f54bc59f4-fb7f4" event={"ID":"e7c59788-726a-4159-91a6-766cad09ff7d","Type":"ContainerStarted","Data":"4ebe197e839262878cd37288bf5776b147740b4b61fb0356b4503527c5d7b856"} Nov 27 17:35:57 crc kubenswrapper[4792]: I1127 17:35:57.014102 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4f18974-fb5b-4bb2-906b-9f17d1297b04","Type":"ContainerStarted","Data":"0283aedfa4e42dadcbfca6d129db28d408caa5ef3315efa43be5b2f173c7e021"} Nov 27 17:35:57 crc kubenswrapper[4792]: I1127 17:35:57.014198 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 27 17:35:57 crc kubenswrapper[4792]: I1127 17:35:57.016285 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" event={"ID":"02e83c23-359e-428f-acab-41d6912a84ab","Type":"ContainerStarted","Data":"cf36f33943ed10ef378bc2cbd4d2699bb72c25c1469d9890cfac5c57becb84eb"} Nov 27 17:35:57 crc kubenswrapper[4792]: I1127 17:35:57.020129 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-695947b5db-q2kr8" event={"ID":"9603abd5-f9a5-4ace-9d0f-652992d6de1e","Type":"ContainerStarted","Data":"3bb793e40dc9b5950bbeeba9439bff3c121853fb9419212808e4dfd02754f6f5"} Nov 27 17:35:57 crc kubenswrapper[4792]: I1127 17:35:57.020176 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-695947b5db-q2kr8" event={"ID":"9603abd5-f9a5-4ace-9d0f-652992d6de1e","Type":"ContainerStarted","Data":"0920987ee643a4b8fa8a41ab6cfdb19cf396e8da4a95b18bdd82a9663ab15774"} Nov 27 17:35:57 crc kubenswrapper[4792]: I1127 17:35:57.020499 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-695947b5db-q2kr8" Nov 27 17:35:57 crc kubenswrapper[4792]: I1127 17:35:57.043344 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.368497786 podStartE2EDuration="10.043325703s" podCreationTimestamp="2025-11-27 17:35:47 +0000 UTC" firstStartedPulling="2025-11-27 17:35:48.81768922 +0000 UTC m=+1571.160515538" lastFinishedPulling="2025-11-27 17:35:56.492517137 +0000 UTC m=+1578.835343455" observedRunningTime="2025-11-27 17:35:57.042877582 +0000 UTC m=+1579.385703910" watchObservedRunningTime="2025-11-27 17:35:57.043325703 +0000 UTC m=+1579.386152021" Nov 27 17:35:57 crc kubenswrapper[4792]: I1127 17:35:57.069307 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-695947b5db-q2kr8" podStartSLOduration=2.069287357 podStartE2EDuration="2.069287357s" podCreationTimestamp="2025-11-27 17:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:35:57.063873063 +0000 UTC m=+1579.406699381" watchObservedRunningTime="2025-11-27 17:35:57.069287357 +0000 UTC m=+1579.412113675" Nov 27 17:35:59 crc kubenswrapper[4792]: I1127 17:35:59.053400 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-f54bc59f4-fb7f4" event={"ID":"e7c59788-726a-4159-91a6-766cad09ff7d","Type":"ContainerStarted","Data":"8a64b5b8842e8d970678f68b7d4ffdb6d6457f3c96e4fda3756576f51046e73b"} Nov 27 17:35:59 crc kubenswrapper[4792]: I1127 17:35:59.054348 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:35:59 crc kubenswrapper[4792]: I1127 17:35:59.056669 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" event={"ID":"02e83c23-359e-428f-acab-41d6912a84ab","Type":"ContainerStarted","Data":"b3495e81bc7c44a73d9f5ab5fe8a5b7d2a013130c6522a67fc56ff10073e0aca"} Nov 27 17:35:59 crc kubenswrapper[4792]: I1127 17:35:59.056863 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:35:59 crc kubenswrapper[4792]: I1127 17:35:59.078486 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-f54bc59f4-fb7f4" podStartSLOduration=2.260915605 podStartE2EDuration="4.078468881s" podCreationTimestamp="2025-11-27 17:35:55 +0000 UTC" firstStartedPulling="2025-11-27 17:35:56.75926944 +0000 UTC m=+1579.102095758" lastFinishedPulling="2025-11-27 17:35:58.576822716 +0000 UTC m=+1580.919649034" observedRunningTime="2025-11-27 17:35:59.07438667 +0000 UTC m=+1581.417212988" watchObservedRunningTime="2025-11-27 17:35:59.078468881 +0000 UTC m=+1581.421295199" Nov 27 17:35:59 crc kubenswrapper[4792]: I1127 17:35:59.099919 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" podStartSLOduration=2.478542518 podStartE2EDuration="4.099900303s" podCreationTimestamp="2025-11-27 17:35:55 +0000 UTC" firstStartedPulling="2025-11-27 17:35:56.950921498 +0000 UTC m=+1579.293747816" lastFinishedPulling="2025-11-27 17:35:58.572279283 +0000 UTC m=+1580.915105601" observedRunningTime="2025-11-27 17:35:59.098826397 +0000 UTC m=+1581.441652715" watchObservedRunningTime="2025-11-27 17:35:59.099900303 +0000 UTC m=+1581.442726631" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.292204 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h4k55"] Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.297710 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.326514 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4k55"] Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.405439 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-utilities\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.406068 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-catalog-content\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.406295 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4pr\" (UniqueName: \"kubernetes.io/projected/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-kube-api-access-jr4pr\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.508337 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-utilities\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.508488 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-catalog-content\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.508591 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr4pr\" (UniqueName: \"kubernetes.io/projected/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-kube-api-access-jr4pr\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.509145 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-utilities\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.509264 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-catalog-content\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.550734 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jr4pr\" (UniqueName: \"kubernetes.io/projected/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-kube-api-access-jr4pr\") pod \"redhat-marketplace-h4k55\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.633134 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:00 crc kubenswrapper[4792]: I1127 17:36:00.818404 4792 scope.go:117] "RemoveContainer" containerID="ee77d21d98990c8ef21b29587f255577c0c94058f424026a0c7ec8fd34c2522a" Nov 27 17:36:01 crc kubenswrapper[4792]: I1127 17:36:01.282250 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4k55"] Nov 27 17:36:01 crc kubenswrapper[4792]: W1127 17:36:01.291757 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ba04f9b_3f6e_4d72_bb0c_03e85a99f81b.slice/crio-3689ba30a833edc27d76fe3fc9edf311ba5d2310f3e50de7c48d5dabff603fbe WatchSource:0}: Error finding container 3689ba30a833edc27d76fe3fc9edf311ba5d2310f3e50de7c48d5dabff603fbe: Status 404 returned error can't find the container with id 3689ba30a833edc27d76fe3fc9edf311ba5d2310f3e50de7c48d5dabff603fbe Nov 27 17:36:02 crc kubenswrapper[4792]: I1127 17:36:02.090890 4792 generic.go:334] "Generic (PLEG): container finished" podID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerID="4227c2e23363d2b0cc49a4fb27455d31201b9f7ca2dd1485ad51be9d5894435f" exitCode=0 Nov 27 17:36:02 crc kubenswrapper[4792]: I1127 17:36:02.090995 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4k55" event={"ID":"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b","Type":"ContainerDied","Data":"4227c2e23363d2b0cc49a4fb27455d31201b9f7ca2dd1485ad51be9d5894435f"} Nov 27 17:36:02 crc kubenswrapper[4792]: I1127 17:36:02.091171 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4k55" event={"ID":"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b","Type":"ContainerStarted","Data":"3689ba30a833edc27d76fe3fc9edf311ba5d2310f3e50de7c48d5dabff603fbe"} Nov 27 17:36:04 crc kubenswrapper[4792]: I1127 17:36:04.123219 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4k55" event={"ID":"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b","Type":"ContainerStarted","Data":"7f5e5d5bfe258e84a6017662d372476a4dfd073e1634de59e113ae444b684e27"} Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.138221 4792 generic.go:334] "Generic (PLEG): container finished" podID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerID="7f5e5d5bfe258e84a6017662d372476a4dfd073e1634de59e113ae444b684e27" exitCode=0 Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.138298 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4k55" event={"ID":"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b","Type":"ContainerDied","Data":"7f5e5d5bfe258e84a6017662d372476a4dfd073e1634de59e113ae444b684e27"} Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.167732 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj"] Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.169839 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.172482 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.172824 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.172900 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.175042 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.251983 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj"] Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.326924 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84k4f\" (UniqueName: \"kubernetes.io/projected/8de61141-d67f-4491-ade9-57da76c018e7-kube-api-access-84k4f\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.327023 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.327133 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.327175 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.430248 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.431120 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.431379 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84k4f\" (UniqueName: \"kubernetes.io/projected/8de61141-d67f-4491-ade9-57da76c018e7-kube-api-access-84k4f\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.431603 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.438536 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.447749 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.448469 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.451491 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84k4f\" (UniqueName: \"kubernetes.io/projected/8de61141-d67f-4491-ade9-57da76c018e7-kube-api-access-84k4f\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:05 crc kubenswrapper[4792]: I1127 17:36:05.494664 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:06 crc kubenswrapper[4792]: I1127 17:36:06.110539 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-695947b5db-q2kr8" Nov 27 17:36:06 crc kubenswrapper[4792]: I1127 17:36:06.173185 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6ccb4649c9-gt6v5"] Nov 27 17:36:06 crc kubenswrapper[4792]: I1127 17:36:06.173457 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6ccb4649c9-gt6v5" podUID="574d8fe9-d9e1-436f-863f-2245cbecd37a" containerName="heat-engine" containerID="cri-o://aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" gracePeriod=60 Nov 27 17:36:06 crc kubenswrapper[4792]: I1127 17:36:06.909439 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj"] Nov 27 17:36:07 crc kubenswrapper[4792]: I1127 17:36:07.169164 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4k55" event={"ID":"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b","Type":"ContainerStarted","Data":"e10f563e9be050dd901e64b3d3c25524f77af2acafe61fcc18d3ca74dca5c6f9"} Nov 27 17:36:07 crc kubenswrapper[4792]: I1127 17:36:07.170771 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" event={"ID":"8de61141-d67f-4491-ade9-57da76c018e7","Type":"ContainerStarted","Data":"663f95de2a837cc4b34e9fa8d2c826cea9fcd1466262fbd13d293203c1e0583d"} Nov 27 17:36:07 crc kubenswrapper[4792]: I1127 17:36:07.202824 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h4k55" podStartSLOduration=2.799186227 podStartE2EDuration="7.202783828s" podCreationTimestamp="2025-11-27 17:36:00 +0000 UTC" firstStartedPulling="2025-11-27 17:36:02.093155929 +0000 UTC m=+1584.435982247" lastFinishedPulling="2025-11-27 17:36:06.49675352 +0000 UTC m=+1588.839579848" observedRunningTime="2025-11-27 17:36:07.18634832 +0000 UTC m=+1589.529174668" watchObservedRunningTime="2025-11-27 17:36:07.202783828 +0000 UTC m=+1589.545610136" Nov 27 17:36:07 crc kubenswrapper[4792]: I1127 17:36:07.687008 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:36:07 crc kubenswrapper[4792]: E1127 17:36:07.687487 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:36:08 crc kubenswrapper[4792]: I1127 17:36:08.111374 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6cc9cdbfc-zr59q" Nov 27 17:36:08 crc kubenswrapper[4792]: I1127 17:36:08.182661 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-688c7694b8-5sbvr"] Nov 27 17:36:08 crc kubenswrapper[4792]: I1127 17:36:08.183041 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" podUID="0dc95085-b126-48ff-b0ea-98682fbf66fd" containerName="heat-cfnapi" 
containerID="cri-o://57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647" gracePeriod=60 Nov 27 17:36:08 crc kubenswrapper[4792]: I1127 17:36:08.420142 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-f54bc59f4-fb7f4" Nov 27 17:36:08 crc kubenswrapper[4792]: I1127 17:36:08.496201 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c756f7b9-t4njz"] Nov 27 17:36:08 crc kubenswrapper[4792]: I1127 17:36:08.496478 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7c756f7b9-t4njz" podUID="d88b28ec-1e65-4b0c-b691-9c44bef0ef06" containerName="heat-api" containerID="cri-o://9e17a21ccbb0389e068575269c7a015a6b408876fae596783fcc71ee3bb192fb" gracePeriod=60 Nov 27 17:36:09 crc kubenswrapper[4792]: I1127 17:36:09.200012 4792 generic.go:334] "Generic (PLEG): container finished" podID="6d2993a9-7994-4249-bfd1-acc7b734eb16" containerID="2e1ea603b0791010fb36b4c0919f57d593550478de99a220797b0955e12169b9" exitCode=0 Nov 27 17:36:09 crc kubenswrapper[4792]: I1127 17:36:09.200191 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d2993a9-7994-4249-bfd1-acc7b734eb16","Type":"ContainerDied","Data":"2e1ea603b0791010fb36b4c0919f57d593550478de99a220797b0955e12169b9"} Nov 27 17:36:09 crc kubenswrapper[4792]: I1127 17:36:09.209789 4792 generic.go:334] "Generic (PLEG): container finished" podID="73468e89-af69-44aa-bc4d-66c7e34a8dff" containerID="afffe003ab045eadf1d37a5d0ee3b591bef17e3a1d4c2b7b207bab815c4b39fc" exitCode=0 Nov 27 17:36:09 crc kubenswrapper[4792]: I1127 17:36:09.209839 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73468e89-af69-44aa-bc4d-66c7e34a8dff","Type":"ContainerDied","Data":"afffe003ab045eadf1d37a5d0ee3b591bef17e3a1d4c2b7b207bab815c4b39fc"} Nov 27 17:36:10 crc kubenswrapper[4792]: I1127 17:36:10.223165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"73468e89-af69-44aa-bc4d-66c7e34a8dff","Type":"ContainerStarted","Data":"eb34a32972330af2114c2ff2934412878d3b7bc5c9bc38e4f3e72798314cd6e3"} Nov 27 17:36:10 crc kubenswrapper[4792]: I1127 17:36:10.224830 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 27 17:36:10 crc kubenswrapper[4792]: I1127 17:36:10.225248 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6d2993a9-7994-4249-bfd1-acc7b734eb16","Type":"ContainerStarted","Data":"74451e2ba546ff6efe621c56431bde0cdf830dbec0b039a11be2db6c44e4f9b0"} Nov 27 17:36:10 crc kubenswrapper[4792]: I1127 17:36:10.225522 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 27 17:36:10 crc kubenswrapper[4792]: I1127 17:36:10.289567 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.289547966 podStartE2EDuration="38.289547966s" podCreationTimestamp="2025-11-27 17:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:36:10.285829194 +0000 UTC m=+1592.628655512" watchObservedRunningTime="2025-11-27 17:36:10.289547966 +0000 UTC m=+1592.632374284" Nov 27 17:36:10 crc kubenswrapper[4792]: I1127 17:36:10.297044 4792 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.297027792 podStartE2EDuration="38.297027792s" podCreationTimestamp="2025-11-27 17:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 17:36:10.264092264 +0000 UTC m=+1592.606918582" watchObservedRunningTime="2025-11-27 17:36:10.297027792 +0000 UTC m=+1592.639854110" Nov 27 17:36:10 crc kubenswrapper[4792]: I1127 17:36:10.633512 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:10 crc kubenswrapper[4792]: I1127 17:36:10.633590 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.235733 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-jq7k7"] Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.248622 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-jq7k7"] Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.338263 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-8p9hr"] Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.340464 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.343519 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.352866 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8p9hr"] Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.413589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-combined-ca-bundle\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.414098 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-config-data\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.414248 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-scripts\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.414353 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfxh\" (UniqueName: \"kubernetes.io/projected/798770c9-f0ca-4e64-834f-c7ae9156c93f-kube-api-access-4qfxh\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.476793 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" podUID="0dc95085-b126-48ff-b0ea-98682fbf66fd" containerName="heat-cfnapi" 
probeResult="failure" output="Get \"https://10.217.0.227:8000/healthcheck\": read tcp 10.217.0.2:57818->10.217.0.227:8000: read: connection reset by peer" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.517599 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-config-data\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.517707 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-scripts\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.517753 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfxh\" (UniqueName: \"kubernetes.io/projected/798770c9-f0ca-4e64-834f-c7ae9156c93f-kube-api-access-4qfxh\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.517780 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-combined-ca-bundle\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.523912 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-combined-ca-bundle\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.526445 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-config-data\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.543452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-scripts\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.552824 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfxh\" (UniqueName: \"kubernetes.io/projected/798770c9-f0ca-4e64-834f-c7ae9156c93f-kube-api-access-4qfxh\") pod \"aodh-db-sync-8p9hr\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: E1127 17:36:11.606633 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 27 17:36:11 crc kubenswrapper[4792]: E1127 17:36:11.609576 4792 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 27 17:36:11 crc kubenswrapper[4792]: E1127 17:36:11.613101 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 27 17:36:11 crc kubenswrapper[4792]: E1127 17:36:11.613140 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6ccb4649c9-gt6v5" podUID="574d8fe9-d9e1-436f-863f-2245cbecd37a" containerName="heat-engine" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.668147 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:11 crc kubenswrapper[4792]: I1127 17:36:11.698899 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-h4k55" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="registry-server" probeResult="failure" output=< Nov 27 17:36:11 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:36:11 crc kubenswrapper[4792]: > Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.172199 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.235590 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-combined-ca-bundle\") pod \"0dc95085-b126-48ff-b0ea-98682fbf66fd\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.235675 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-public-tls-certs\") pod \"0dc95085-b126-48ff-b0ea-98682fbf66fd\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.235726 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data-custom\") pod \"0dc95085-b126-48ff-b0ea-98682fbf66fd\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.235787 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-internal-tls-certs\") pod \"0dc95085-b126-48ff-b0ea-98682fbf66fd\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.235805 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58qs4\" (UniqueName: \"kubernetes.io/projected/0dc95085-b126-48ff-b0ea-98682fbf66fd-kube-api-access-58qs4\") pod 
\"0dc95085-b126-48ff-b0ea-98682fbf66fd\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.235912 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data\") pod \"0dc95085-b126-48ff-b0ea-98682fbf66fd\" (UID: \"0dc95085-b126-48ff-b0ea-98682fbf66fd\") " Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.266554 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0dc95085-b126-48ff-b0ea-98682fbf66fd" (UID: "0dc95085-b126-48ff-b0ea-98682fbf66fd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.276222 4792 generic.go:334] "Generic (PLEG): container finished" podID="d88b28ec-1e65-4b0c-b691-9c44bef0ef06" containerID="9e17a21ccbb0389e068575269c7a015a6b408876fae596783fcc71ee3bb192fb" exitCode=0 Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.276350 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c756f7b9-t4njz" event={"ID":"d88b28ec-1e65-4b0c-b691-9c44bef0ef06","Type":"ContainerDied","Data":"9e17a21ccbb0389e068575269c7a015a6b408876fae596783fcc71ee3bb192fb"} Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.284693 4792 generic.go:334] "Generic (PLEG): container finished" podID="0dc95085-b126-48ff-b0ea-98682fbf66fd" containerID="57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647" exitCode=0 Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.285347 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" event={"ID":"0dc95085-b126-48ff-b0ea-98682fbf66fd","Type":"ContainerDied","Data":"57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647"} Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.286170 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" event={"ID":"0dc95085-b126-48ff-b0ea-98682fbf66fd","Type":"ContainerDied","Data":"2df5b3f75bc069eeb60234dfc6484d5113e8bdc489688ad997ba41c4c4349aff"} Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.286388 4792 scope.go:117] "RemoveContainer" containerID="57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.287020 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-688c7694b8-5sbvr" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.289850 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc95085-b126-48ff-b0ea-98682fbf66fd-kube-api-access-58qs4" (OuterVolumeSpecName: "kube-api-access-58qs4") pod "0dc95085-b126-48ff-b0ea-98682fbf66fd" (UID: "0dc95085-b126-48ff-b0ea-98682fbf66fd"). InnerVolumeSpecName "kube-api-access-58qs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.315826 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dc95085-b126-48ff-b0ea-98682fbf66fd" (UID: "0dc95085-b126-48ff-b0ea-98682fbf66fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.339777 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0dc95085-b126-48ff-b0ea-98682fbf66fd" (UID: "0dc95085-b126-48ff-b0ea-98682fbf66fd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.341203 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.341230 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.341239 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58qs4\" (UniqueName: \"kubernetes.io/projected/0dc95085-b126-48ff-b0ea-98682fbf66fd-kube-api-access-58qs4\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.341249 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.379161 4792 scope.go:117] "RemoveContainer" containerID="57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.379783 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0dc95085-b126-48ff-b0ea-98682fbf66fd" (UID: "0dc95085-b126-48ff-b0ea-98682fbf66fd"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: E1127 17:36:12.380329 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647\": container with ID starting with 57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647 not found: ID does not exist" containerID="57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.380354 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647"} err="failed to get container status \"57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647\": rpc error: code = NotFound desc = could not find container \"57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647\": container with ID starting with 57553a92651567d50a8bd4658d3bb851f04958d643bf90406da72197b9a87647 not found: ID does not exist" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.399079 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data" (OuterVolumeSpecName: "config-data") pod "0dc95085-b126-48ff-b0ea-98682fbf66fd" (UID: "0dc95085-b126-48ff-b0ea-98682fbf66fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.415412 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-8p9hr"] Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.451369 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.451399 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dc95085-b126-48ff-b0ea-98682fbf66fd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.543842 4792 util.go:48] "No ready sandbox for pod can be found. 
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.654885 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-combined-ca-bundle\") pod \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") "
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.654943 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data\") pod \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") "
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.654969 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-public-tls-certs\") pod \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") "
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.655024 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data-custom\") pod \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") "
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.655163 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-internal-tls-certs\") pod \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") "
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.655293 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pbnk\" (UniqueName: \"kubernetes.io/projected/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-kube-api-access-9pbnk\") pod \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\" (UID: \"d88b28ec-1e65-4b0c-b691-9c44bef0ef06\") "
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.658863 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-688c7694b8-5sbvr"]
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.669170 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-688c7694b8-5sbvr"]
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.681354 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-kube-api-access-9pbnk" (OuterVolumeSpecName: "kube-api-access-9pbnk") pod "d88b28ec-1e65-4b0c-b691-9c44bef0ef06" (UID: "d88b28ec-1e65-4b0c-b691-9c44bef0ef06"). InnerVolumeSpecName "kube-api-access-9pbnk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.683352 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d88b28ec-1e65-4b0c-b691-9c44bef0ef06" (UID: "d88b28ec-1e65-4b0c-b691-9c44bef0ef06"). InnerVolumeSpecName "config-data-custom".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.715302 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc95085-b126-48ff-b0ea-98682fbf66fd" path="/var/lib/kubelet/pods/0dc95085-b126-48ff-b0ea-98682fbf66fd/volumes" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.717071 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f088e56-5dc8-4c86-b0f8-69ba476e721f" path="/var/lib/kubelet/pods/4f088e56-5dc8-4c86-b0f8-69ba476e721f/volumes" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.758854 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.758895 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pbnk\" (UniqueName: \"kubernetes.io/projected/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-kube-api-access-9pbnk\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.774278 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d88b28ec-1e65-4b0c-b691-9c44bef0ef06" (UID: "d88b28ec-1e65-4b0c-b691-9c44bef0ef06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.809793 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d88b28ec-1e65-4b0c-b691-9c44bef0ef06" (UID: "d88b28ec-1e65-4b0c-b691-9c44bef0ef06"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.839865 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data" (OuterVolumeSpecName: "config-data") pod "d88b28ec-1e65-4b0c-b691-9c44bef0ef06" (UID: "d88b28ec-1e65-4b0c-b691-9c44bef0ef06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.846739 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d88b28ec-1e65-4b0c-b691-9c44bef0ef06" (UID: "d88b28ec-1e65-4b0c-b691-9c44bef0ef06"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.862858 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.862893 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.862905 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:12 crc kubenswrapper[4792]: I1127 17:36:12.862913 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d88b28ec-1e65-4b0c-b691-9c44bef0ef06-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:13 crc kubenswrapper[4792]: I1127 17:36:13.300662 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c756f7b9-t4njz" event={"ID":"d88b28ec-1e65-4b0c-b691-9c44bef0ef06","Type":"ContainerDied","Data":"268a5545eb3bdcc237d8d40fa7e80e25b2daad8364f07c631e400c1a0fdb6e90"} Nov 27 17:36:13 crc kubenswrapper[4792]: I1127 17:36:13.300970 4792 scope.go:117] "RemoveContainer" containerID="9e17a21ccbb0389e068575269c7a015a6b408876fae596783fcc71ee3bb192fb" Nov 27 17:36:13 crc kubenswrapper[4792]: I1127 17:36:13.300854 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c756f7b9-t4njz" Nov 27 17:36:13 crc kubenswrapper[4792]: I1127 17:36:13.303056 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8p9hr" event={"ID":"798770c9-f0ca-4e64-834f-c7ae9156c93f","Type":"ContainerStarted","Data":"537856c53517aaab48e948ef5a3488d60d67fc5384978f45d374588e1310fedd"} Nov 27 17:36:13 crc kubenswrapper[4792]: I1127 17:36:13.343079 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7c756f7b9-t4njz"] Nov 27 17:36:13 crc kubenswrapper[4792]: I1127 17:36:13.353912 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7c756f7b9-t4njz"] Nov 27 17:36:14 crc kubenswrapper[4792]: I1127 17:36:14.703312 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88b28ec-1e65-4b0c-b691-9c44bef0ef06" path="/var/lib/kubelet/pods/d88b28ec-1e65-4b0c-b691-9c44bef0ef06/volumes" Nov 27 17:36:18 crc kubenswrapper[4792]: I1127 17:36:18.298527 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 27 17:36:19 crc kubenswrapper[4792]: I1127 17:36:19.686783 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:36:19 crc kubenswrapper[4792]: E1127 17:36:19.687475 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:36:20 crc kubenswrapper[4792]: 
Nov 27 17:36:20 crc kubenswrapper[4792]: I1127 17:36:20.710562 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h4k55"
Nov 27 17:36:20 crc kubenswrapper[4792]: I1127 17:36:20.784011 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h4k55"
Nov 27 17:36:20 crc kubenswrapper[4792]: I1127 17:36:20.950248 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4k55"]
Nov 27 17:36:21 crc kubenswrapper[4792]: E1127 17:36:21.603734 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 27 17:36:21 crc kubenswrapper[4792]: E1127 17:36:21.605830 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 27 17:36:21 crc kubenswrapper[4792]: E1127 17:36:21.607276 4792 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 27 17:36:21 crc kubenswrapper[4792]: E1127 17:36:21.607527 4792 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6ccb4649c9-gt6v5" podUID="574d8fe9-d9e1-436f-863f-2245cbecd37a" containerName="heat-engine"
Nov 27 17:36:22 crc kubenswrapper[4792]: I1127 17:36:22.411201 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h4k55" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="registry-server" containerID="cri-o://e10f563e9be050dd901e64b3d3c25524f77af2acafe61fcc18d3ca74dca5c6f9" gracePeriod=2
Nov 27 17:36:23 crc kubenswrapper[4792]: I1127 17:36:23.411115 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 27 17:36:23 crc kubenswrapper[4792]: I1127 17:36:23.423586 4792 generic.go:334] "Generic (PLEG): container finished" podID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerID="e10f563e9be050dd901e64b3d3c25524f77af2acafe61fcc18d3ca74dca5c6f9" exitCode=0
Nov 27 17:36:23 crc kubenswrapper[4792]: I1127 17:36:23.423632 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4k55" event={"ID":"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b","Type":"ContainerDied","Data":"e10f563e9be050dd901e64b3d3c25524f77af2acafe61fcc18d3ca74dca5c6f9"}
Nov 27 17:36:23 crc kubenswrapper[4792]: I1127 17:36:23.607809 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 27 17:36:25 crc kubenswrapper[4792]: E1127 17:36:25.186083 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled"
image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Nov 27 17:36:25 crc kubenswrapper[4792]: E1127 17:36:25.186423 4792 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Nov 27 17:36:25 crc kubenswrapper[4792]: E1127 17:36:25.186566 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:aodh-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:AodhPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:AodhPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42402,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aodh-db-sync-8p9hr_openstack(798770c9-f0ca-4e64-834f-c7ae9156c93f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 17:36:25 crc kubenswrapper[4792]: E1127 17:36:25.188086 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/aodh-db-sync-8p9hr" podUID="798770c9-f0ca-4e64-834f-c7ae9156c93f" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.514243 4792 generic.go:334] "Generic (PLEG): container finished" podID="574d8fe9-d9e1-436f-863f-2245cbecd37a" containerID="aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" exitCode=0 Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.516417 4792 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6ccb4649c9-gt6v5" event={"ID":"574d8fe9-d9e1-436f-863f-2245cbecd37a","Type":"ContainerDied","Data":"aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480"} Nov 27 17:36:25 crc kubenswrapper[4792]: E1127 17:36:25.517537 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested\\\"\"" pod="openstack/aodh-db-sync-8p9hr" podUID="798770c9-f0ca-4e64-834f-c7ae9156c93f" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.669499 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.715634 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr4pr\" (UniqueName: \"kubernetes.io/projected/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-kube-api-access-jr4pr\") pod \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.715750 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-catalog-content\") pod \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.715807 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-utilities\") pod \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\" (UID: \"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b\") " Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.720111 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-utilities" (OuterVolumeSpecName: "utilities") pod "2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" (UID: "2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.726635 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-kube-api-access-jr4pr" (OuterVolumeSpecName: "kube-api-access-jr4pr") pod "2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" (UID: "2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b"). InnerVolumeSpecName "kube-api-access-jr4pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.740090 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" (UID: "2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.820927 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr4pr\" (UniqueName: \"kubernetes.io/projected/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-kube-api-access-jr4pr\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.820969 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.820980 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.865160 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.921985 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data-custom\") pod \"574d8fe9-d9e1-436f-863f-2245cbecd37a\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.922135 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9kjp\" (UniqueName: \"kubernetes.io/projected/574d8fe9-d9e1-436f-863f-2245cbecd37a-kube-api-access-b9kjp\") pod \"574d8fe9-d9e1-436f-863f-2245cbecd37a\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.922213 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-combined-ca-bundle\") pod \"574d8fe9-d9e1-436f-863f-2245cbecd37a\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.922396 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data\") pod \"574d8fe9-d9e1-436f-863f-2245cbecd37a\" (UID: \"574d8fe9-d9e1-436f-863f-2245cbecd37a\") " Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.926316 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574d8fe9-d9e1-436f-863f-2245cbecd37a-kube-api-access-b9kjp" (OuterVolumeSpecName: "kube-api-access-b9kjp") pod "574d8fe9-d9e1-436f-863f-2245cbecd37a" (UID: "574d8fe9-d9e1-436f-863f-2245cbecd37a"). InnerVolumeSpecName "kube-api-access-b9kjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.927010 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "574d8fe9-d9e1-436f-863f-2245cbecd37a" (UID: "574d8fe9-d9e1-436f-863f-2245cbecd37a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:25 crc kubenswrapper[4792]: I1127 17:36:25.972271 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "574d8fe9-d9e1-436f-863f-2245cbecd37a" (UID: "574d8fe9-d9e1-436f-863f-2245cbecd37a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.015021 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data" (OuterVolumeSpecName: "config-data") pod "574d8fe9-d9e1-436f-863f-2245cbecd37a" (UID: "574d8fe9-d9e1-436f-863f-2245cbecd37a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.025986 4792 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.026032 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9kjp\" (UniqueName: \"kubernetes.io/projected/574d8fe9-d9e1-436f-863f-2245cbecd37a-kube-api-access-b9kjp\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.026267 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.026289 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/574d8fe9-d9e1-436f-863f-2245cbecd37a-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.527705 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6ccb4649c9-gt6v5" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.527691 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6ccb4649c9-gt6v5" event={"ID":"574d8fe9-d9e1-436f-863f-2245cbecd37a","Type":"ContainerDied","Data":"8aff2f1b877037b50141c7c6d4cb0eafbe01b844fe8f067e030cf6f3676a3c93"} Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.528141 4792 scope.go:117] "RemoveContainer" containerID="aa5831d527f7932aff5b65b6acb8669f2cb961eb9d284a840ec60309f866b480" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.529296 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" event={"ID":"8de61141-d67f-4491-ade9-57da76c018e7","Type":"ContainerStarted","Data":"d15d747392e206526ffeaa3b92341825df1c3d38a97f5a6014cd44c3c11db4ef"} Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.535074 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4k55" event={"ID":"2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b","Type":"ContainerDied","Data":"3689ba30a833edc27d76fe3fc9edf311ba5d2310f3e50de7c48d5dabff603fbe"} Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.535167 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4k55" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.559192 4792 scope.go:117] "RemoveContainer" containerID="e10f563e9be050dd901e64b3d3c25524f77af2acafe61fcc18d3ca74dca5c6f9" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.562380 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" podStartSLOduration=3.276447416 podStartE2EDuration="21.562352982s" podCreationTimestamp="2025-11-27 17:36:05 +0000 UTC" firstStartedPulling="2025-11-27 17:36:06.912888351 +0000 UTC m=+1589.255714659" lastFinishedPulling="2025-11-27 17:36:25.198793887 +0000 UTC m=+1607.541620225" observedRunningTime="2025-11-27 17:36:26.554462616 +0000 UTC m=+1608.897288944" watchObservedRunningTime="2025-11-27 17:36:26.562352982 +0000 UTC m=+1608.905179300" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.584783 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4k55"] Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.596317 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4k55"] Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.597476 4792 scope.go:117] "RemoveContainer" containerID="7f5e5d5bfe258e84a6017662d372476a4dfd073e1634de59e113ae444b684e27" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.614575 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6ccb4649c9-gt6v5"] Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.626015 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6ccb4649c9-gt6v5"] Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.626718 4792 scope.go:117] "RemoveContainer" containerID="4227c2e23363d2b0cc49a4fb27455d31201b9f7ca2dd1485ad51be9d5894435f" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.712219 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" path="/var/lib/kubelet/pods/2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b/volumes" Nov 27 17:36:26 crc kubenswrapper[4792]: I1127 17:36:26.714012 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574d8fe9-d9e1-436f-863f-2245cbecd37a" path="/var/lib/kubelet/pods/574d8fe9-d9e1-436f-863f-2245cbecd37a/volumes" Nov 27 17:36:30 crc kubenswrapper[4792]: I1127 17:36:30.687590 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:36:30 crc kubenswrapper[4792]: E1127 17:36:30.688693 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:36:37 crc kubenswrapper[4792]: E1127 17:36:37.536859 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de61141_d67f_4491_ade9_57da76c018e7.slice/crio-d15d747392e206526ffeaa3b92341825df1c3d38a97f5a6014cd44c3c11db4ef.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de61141_d67f_4491_ade9_57da76c018e7.slice/crio-conmon-d15d747392e206526ffeaa3b92341825df1c3d38a97f5a6014cd44c3c11db4ef.scope\": RecentStats: unable to find data in memory cache]" Nov 27 17:36:37 crc kubenswrapper[4792]: I1127 17:36:37.706832 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8p9hr" event={"ID":"798770c9-f0ca-4e64-834f-c7ae9156c93f","Type":"ContainerStarted","Data":"6b01ea71f9ada9af9f6f1437fe826461e0997751700ad581ec6b642b2cadedaa"} Nov 27 17:36:37 crc kubenswrapper[4792]: I1127 17:36:37.720581 4792 generic.go:334] "Generic (PLEG): container finished" podID="8de61141-d67f-4491-ade9-57da76c018e7" containerID="d15d747392e206526ffeaa3b92341825df1c3d38a97f5a6014cd44c3c11db4ef" exitCode=0 Nov 27 17:36:37 crc kubenswrapper[4792]: I1127 17:36:37.720773 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" event={"ID":"8de61141-d67f-4491-ade9-57da76c018e7","Type":"ContainerDied","Data":"d15d747392e206526ffeaa3b92341825df1c3d38a97f5a6014cd44c3c11db4ef"} Nov 27 17:36:37 crc kubenswrapper[4792]: I1127 17:36:37.732344 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-8p9hr" podStartSLOduration=2.086005496 podStartE2EDuration="26.732320739s" podCreationTimestamp="2025-11-27 17:36:11 +0000 UTC" firstStartedPulling="2025-11-27 17:36:12.41838563 +0000 UTC m=+1594.761211948" lastFinishedPulling="2025-11-27 17:36:37.064700833 +0000 UTC m=+1619.407527191" observedRunningTime="2025-11-27 17:36:37.728355341 +0000 UTC m=+1620.071181689" watchObservedRunningTime="2025-11-27 17:36:37.732320739 +0000 UTC m=+1620.075147057" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.289339 4792 util.go:48] "No ready sandbox for pod can be found. 
Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.385819 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-repo-setup-combined-ca-bundle\") pod \"8de61141-d67f-4491-ade9-57da76c018e7\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") "
Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.385924 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-inventory\") pod \"8de61141-d67f-4491-ade9-57da76c018e7\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") "
Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.386005 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84k4f\" (UniqueName: \"kubernetes.io/projected/8de61141-d67f-4491-ade9-57da76c018e7-kube-api-access-84k4f\") pod \"8de61141-d67f-4491-ade9-57da76c018e7\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") "
Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.386054 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-ssh-key\") pod \"8de61141-d67f-4491-ade9-57da76c018e7\" (UID: \"8de61141-d67f-4491-ade9-57da76c018e7\") "
Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.392060 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8de61141-d67f-4491-ade9-57da76c018e7" (UID: "8de61141-d67f-4491-ade9-57da76c018e7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.396807 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de61141-d67f-4491-ade9-57da76c018e7-kube-api-access-84k4f" (OuterVolumeSpecName: "kube-api-access-84k4f") pod "8de61141-d67f-4491-ade9-57da76c018e7" (UID: "8de61141-d67f-4491-ade9-57da76c018e7"). InnerVolumeSpecName "kube-api-access-84k4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.419862 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-inventory" (OuterVolumeSpecName: "inventory") pod "8de61141-d67f-4491-ade9-57da76c018e7" (UID: "8de61141-d67f-4491-ade9-57da76c018e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.420001 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8de61141-d67f-4491-ade9-57da76c018e7" (UID: "8de61141-d67f-4491-ade9-57da76c018e7"). InnerVolumeSpecName "ssh-key".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.488661 4792 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.488694 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.488704 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84k4f\" (UniqueName: \"kubernetes.io/projected/8de61141-d67f-4491-ade9-57da76c018e7-kube-api-access-84k4f\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.488714 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8de61141-d67f-4491-ade9-57da76c018e7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.744784 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" event={"ID":"8de61141-d67f-4491-ade9-57da76c018e7","Type":"ContainerDied","Data":"663f95de2a837cc4b34e9fa8d2c826cea9fcd1466262fbd13d293203c1e0583d"} Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.745023 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663f95de2a837cc4b34e9fa8d2c826cea9fcd1466262fbd13d293203c1e0583d" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.744869 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.842854 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54"] Nov 27 17:36:39 crc kubenswrapper[4792]: E1127 17:36:39.843808 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="extract-utilities" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.843849 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="extract-utilities" Nov 27 17:36:39 crc kubenswrapper[4792]: E1127 17:36:39.843872 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc95085-b126-48ff-b0ea-98682fbf66fd" containerName="heat-cfnapi" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.843885 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc95085-b126-48ff-b0ea-98682fbf66fd" containerName="heat-cfnapi" Nov 27 17:36:39 crc kubenswrapper[4792]: E1127 17:36:39.843929 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="extract-content" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.843943 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="extract-content" Nov 27 17:36:39 crc kubenswrapper[4792]: E1127 17:36:39.843975 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="registry-server" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.843987 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="registry-server" Nov 27 17:36:39 crc kubenswrapper[4792]: E1127 17:36:39.844010 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88b28ec-1e65-4b0c-b691-9c44bef0ef06" containerName="heat-api" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.844024 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88b28ec-1e65-4b0c-b691-9c44bef0ef06" containerName="heat-api" Nov 27 17:36:39 crc kubenswrapper[4792]: E1127 17:36:39.844065 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de61141-d67f-4491-ade9-57da76c018e7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.844078 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de61141-d67f-4491-ade9-57da76c018e7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 27 17:36:39 crc kubenswrapper[4792]: E1127 17:36:39.844109 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574d8fe9-d9e1-436f-863f-2245cbecd37a" containerName="heat-engine" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.844121 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="574d8fe9-d9e1-436f-863f-2245cbecd37a" containerName="heat-engine" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.844506 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc95085-b126-48ff-b0ea-98682fbf66fd" containerName="heat-cfnapi" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.844542 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba04f9b-3f6e-4d72-bb0c-03e85a99f81b" containerName="registry-server" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.844568 
4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de61141-d67f-4491-ade9-57da76c018e7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.844596 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="574d8fe9-d9e1-436f-863f-2245cbecd37a" containerName="heat-engine" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.844634 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88b28ec-1e65-4b0c-b691-9c44bef0ef06" containerName="heat-api" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.846029 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.848329 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.848782 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.849058 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.849321 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.856235 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54"] Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.897724 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.897859 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:39 crc kubenswrapper[4792]: I1127 17:36:39.897902 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kln\" (UniqueName: \"kubernetes.io/projected/70c3419b-b42e-42f5-be83-4de5d0e38566-kube-api-access-g5kln\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:39.999908 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:39.999976 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-g5kln\" (UniqueName: \"kubernetes.io/projected/70c3419b-b42e-42f5-be83-4de5d0e38566-kube-api-access-g5kln\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.000252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.004214 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.004599 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.019195 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5kln\" (UniqueName: \"kubernetes.io/projected/70c3419b-b42e-42f5-be83-4de5d0e38566-kube-api-access-g5kln\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mdb54\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.174395 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.736401 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54"] Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.759385 4792 generic.go:334] "Generic (PLEG): container finished" podID="798770c9-f0ca-4e64-834f-c7ae9156c93f" containerID="6b01ea71f9ada9af9f6f1437fe826461e0997751700ad581ec6b642b2cadedaa" exitCode=0 Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.759462 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8p9hr" event={"ID":"798770c9-f0ca-4e64-834f-c7ae9156c93f","Type":"ContainerDied","Data":"6b01ea71f9ada9af9f6f1437fe826461e0997751700ad581ec6b642b2cadedaa"} Nov 27 17:36:40 crc kubenswrapper[4792]: I1127 17:36:40.761077 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" event={"ID":"70c3419b-b42e-42f5-be83-4de5d0e38566","Type":"ContainerStarted","Data":"6fb7ffe037393688f4ddec1bfddc52e3e19427c525c25d60eb25173485413b5e"} Nov 27 17:36:41 crc kubenswrapper[4792]: I1127 17:36:41.777703 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" event={"ID":"70c3419b-b42e-42f5-be83-4de5d0e38566","Type":"ContainerStarted","Data":"39a04732be2586976fef53b5562f20d64bd5055e7074de5b5e1a0ea26ba62274"} Nov 27 17:36:41 crc kubenswrapper[4792]: I1127 17:36:41.800985 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" podStartSLOduration=2.339698962 podStartE2EDuration="2.800962794s" podCreationTimestamp="2025-11-27 17:36:39 +0000 UTC" firstStartedPulling="2025-11-27 17:36:40.727152034 +0000 UTC m=+1623.069978352" lastFinishedPulling="2025-11-27 17:36:41.188415826 +0000 UTC m=+1623.531242184" observedRunningTime="2025-11-27 17:36:41.798604976 +0000 UTC m=+1624.141431314" watchObservedRunningTime="2025-11-27 17:36:41.800962794 +0000 UTC m=+1624.143789102" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.277175 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.374677 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-scripts\") pod \"798770c9-f0ca-4e64-834f-c7ae9156c93f\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.374835 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qfxh\" (UniqueName: \"kubernetes.io/projected/798770c9-f0ca-4e64-834f-c7ae9156c93f-kube-api-access-4qfxh\") pod \"798770c9-f0ca-4e64-834f-c7ae9156c93f\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.375445 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-config-data\") pod \"798770c9-f0ca-4e64-834f-c7ae9156c93f\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.375935 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-combined-ca-bundle\") pod \"798770c9-f0ca-4e64-834f-c7ae9156c93f\" (UID: \"798770c9-f0ca-4e64-834f-c7ae9156c93f\") " Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.380506 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-scripts" (OuterVolumeSpecName: "scripts") pod "798770c9-f0ca-4e64-834f-c7ae9156c93f" (UID: "798770c9-f0ca-4e64-834f-c7ae9156c93f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.380847 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798770c9-f0ca-4e64-834f-c7ae9156c93f-kube-api-access-4qfxh" (OuterVolumeSpecName: "kube-api-access-4qfxh") pod "798770c9-f0ca-4e64-834f-c7ae9156c93f" (UID: "798770c9-f0ca-4e64-834f-c7ae9156c93f"). InnerVolumeSpecName "kube-api-access-4qfxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.406874 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "798770c9-f0ca-4e64-834f-c7ae9156c93f" (UID: "798770c9-f0ca-4e64-834f-c7ae9156c93f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.407462 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-config-data" (OuterVolumeSpecName: "config-data") pod "798770c9-f0ca-4e64-834f-c7ae9156c93f" (UID: "798770c9-f0ca-4e64-834f-c7ae9156c93f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.479490 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.479545 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.479566 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qfxh\" (UniqueName: \"kubernetes.io/projected/798770c9-f0ca-4e64-834f-c7ae9156c93f-kube-api-access-4qfxh\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.479586 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/798770c9-f0ca-4e64-834f-c7ae9156c93f-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.791745 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-8p9hr" Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.791775 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-8p9hr" event={"ID":"798770c9-f0ca-4e64-834f-c7ae9156c93f","Type":"ContainerDied","Data":"537856c53517aaab48e948ef5a3488d60d67fc5384978f45d374588e1310fedd"} Nov 27 17:36:42 crc kubenswrapper[4792]: I1127 17:36:42.792208 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="537856c53517aaab48e948ef5a3488d60d67fc5384978f45d374588e1310fedd" Nov 27 17:36:43 crc kubenswrapper[4792]: I1127 17:36:43.687263 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:36:43 crc kubenswrapper[4792]: E1127 17:36:43.687691 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:36:44 crc kubenswrapper[4792]: I1127 17:36:44.824799 4792 generic.go:334] "Generic (PLEG): container finished" podID="70c3419b-b42e-42f5-be83-4de5d0e38566" containerID="39a04732be2586976fef53b5562f20d64bd5055e7074de5b5e1a0ea26ba62274" exitCode=0 Nov 27 17:36:44 crc kubenswrapper[4792]: I1127 17:36:44.825301 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" event={"ID":"70c3419b-b42e-42f5-be83-4de5d0e38566","Type":"ContainerDied","Data":"39a04732be2586976fef53b5562f20d64bd5055e7074de5b5e1a0ea26ba62274"} Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.298935 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.370306 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.370637 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-api" containerID="cri-o://1ef7a1e5844ab1f9445bbb0c16c2610237ada53ba51bb79275db18f2b26c1e6b" gracePeriod=30 Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.371223 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-listener" containerID="cri-o://ac9fb9e5908311fe5a378f2c7d163c8bb3114982c12e158c33c1507e8ca479fd" gracePeriod=30 Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.371297 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-notifier" containerID="cri-o://ad22d44a6d8beceebcfcb49ddb6cd0c68d4477b691402937500ce834132dab60" gracePeriod=30 Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.371349 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-evaluator" containerID="cri-o://3c10aa265ef8396c4e48cb4df30d2f4dc6d22f609b5c501b8cbe352be9772642" gracePeriod=30 Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.378518 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-ssh-key\") pod \"70c3419b-b42e-42f5-be83-4de5d0e38566\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.378675 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-inventory\") pod \"70c3419b-b42e-42f5-be83-4de5d0e38566\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.378772 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5kln\" (UniqueName: \"kubernetes.io/projected/70c3419b-b42e-42f5-be83-4de5d0e38566-kube-api-access-g5kln\") pod \"70c3419b-b42e-42f5-be83-4de5d0e38566\" (UID: \"70c3419b-b42e-42f5-be83-4de5d0e38566\") " Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.403982 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c3419b-b42e-42f5-be83-4de5d0e38566-kube-api-access-g5kln" (OuterVolumeSpecName: "kube-api-access-g5kln") pod "70c3419b-b42e-42f5-be83-4de5d0e38566" (UID: "70c3419b-b42e-42f5-be83-4de5d0e38566"). InnerVolumeSpecName "kube-api-access-g5kln". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.410751 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-inventory" (OuterVolumeSpecName: "inventory") pod "70c3419b-b42e-42f5-be83-4de5d0e38566" (UID: "70c3419b-b42e-42f5-be83-4de5d0e38566"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.423078 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "70c3419b-b42e-42f5-be83-4de5d0e38566" (UID: "70c3419b-b42e-42f5-be83-4de5d0e38566"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.488608 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.488667 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70c3419b-b42e-42f5-be83-4de5d0e38566-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.488681 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5kln\" (UniqueName: \"kubernetes.io/projected/70c3419b-b42e-42f5-be83-4de5d0e38566-kube-api-access-g5kln\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.852958 4792 generic.go:334] "Generic (PLEG): container finished" podID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerID="3c10aa265ef8396c4e48cb4df30d2f4dc6d22f609b5c501b8cbe352be9772642" exitCode=0 Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.853007 4792 generic.go:334] "Generic (PLEG): container finished" podID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerID="1ef7a1e5844ab1f9445bbb0c16c2610237ada53ba51bb79275db18f2b26c1e6b" exitCode=0 Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.853028 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerDied","Data":"3c10aa265ef8396c4e48cb4df30d2f4dc6d22f609b5c501b8cbe352be9772642"} Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.853085 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerDied","Data":"1ef7a1e5844ab1f9445bbb0c16c2610237ada53ba51bb79275db18f2b26c1e6b"} Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.855403 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" event={"ID":"70c3419b-b42e-42f5-be83-4de5d0e38566","Type":"ContainerDied","Data":"6fb7ffe037393688f4ddec1bfddc52e3e19427c525c25d60eb25173485413b5e"} Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.855444 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb7ffe037393688f4ddec1bfddc52e3e19427c525c25d60eb25173485413b5e" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.855448 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mdb54" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.906668 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g"] Nov 27 17:36:46 crc kubenswrapper[4792]: E1127 17:36:46.907172 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c3419b-b42e-42f5-be83-4de5d0e38566" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.907189 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c3419b-b42e-42f5-be83-4de5d0e38566" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 27 17:36:46 crc kubenswrapper[4792]: E1127 17:36:46.907210 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798770c9-f0ca-4e64-834f-c7ae9156c93f" containerName="aodh-db-sync" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.907218 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="798770c9-f0ca-4e64-834f-c7ae9156c93f" containerName="aodh-db-sync" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.907451 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="798770c9-f0ca-4e64-834f-c7ae9156c93f" containerName="aodh-db-sync" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.907469 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c3419b-b42e-42f5-be83-4de5d0e38566" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.908222 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.910488 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.910550 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.910691 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.911180 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:36:46 crc kubenswrapper[4792]: I1127 17:36:46.930215 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g"] Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.001187 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.001290 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.001405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.001547 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx7d2\" (UniqueName: \"kubernetes.io/projected/cfb67295-f5ab-48cb-acae-25420d9d77f4-kube-api-access-rx7d2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.103620 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.103933 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx7d2\" (UniqueName: \"kubernetes.io/projected/cfb67295-f5ab-48cb-acae-25420d9d77f4-kube-api-access-rx7d2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.104113 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.104206 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.108137 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.108526 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.109314 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.122273 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx7d2\" (UniqueName: \"kubernetes.io/projected/cfb67295-f5ab-48cb-acae-25420d9d77f4-kube-api-access-rx7d2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.225569 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.806711 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g"] Nov 27 17:36:47 crc kubenswrapper[4792]: I1127 17:36:47.869739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" event={"ID":"cfb67295-f5ab-48cb-acae-25420d9d77f4","Type":"ContainerStarted","Data":"adbcbfe61c5a25fb36f8c3f4b88780b4b482d62c071b348aae12d4dce3edad47"} Nov 27 17:36:50 crc kubenswrapper[4792]: I1127 17:36:50.905374 4792 generic.go:334] "Generic (PLEG): container finished" podID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerID="ac9fb9e5908311fe5a378f2c7d163c8bb3114982c12e158c33c1507e8ca479fd" exitCode=0 Nov 27 17:36:50 crc kubenswrapper[4792]: I1127 17:36:50.905635 4792 generic.go:334] "Generic (PLEG): container finished" podID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerID="ad22d44a6d8beceebcfcb49ddb6cd0c68d4477b691402937500ce834132dab60" exitCode=0 Nov 27 17:36:50 crc kubenswrapper[4792]: I1127 17:36:50.905457 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerDied","Data":"ac9fb9e5908311fe5a378f2c7d163c8bb3114982c12e158c33c1507e8ca479fd"} Nov 27 17:36:50 crc kubenswrapper[4792]: I1127 17:36:50.905697 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerDied","Data":"ad22d44a6d8beceebcfcb49ddb6cd0c68d4477b691402937500ce834132dab60"} Nov 27 17:36:50 crc kubenswrapper[4792]: I1127 17:36:50.905715 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"35f335bf-9584-4205-8bab-e1f8b83cf0db","Type":"ContainerDied","Data":"eac4d299465886c9a4b6fd69aa61c10b41028437614efc5836638ca0a92d87b2"} Nov 27 17:36:50 crc kubenswrapper[4792]: I1127 17:36:50.905726 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac4d299465886c9a4b6fd69aa61c10b41028437614efc5836638ca0a92d87b2" Nov 27 17:36:50 crc kubenswrapper[4792]: I1127 17:36:50.953085 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.109360 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-config-data\") pod \"35f335bf-9584-4205-8bab-e1f8b83cf0db\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.109668 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-combined-ca-bundle\") pod \"35f335bf-9584-4205-8bab-e1f8b83cf0db\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.109739 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjh4r\" (UniqueName: \"kubernetes.io/projected/35f335bf-9584-4205-8bab-e1f8b83cf0db-kube-api-access-qjh4r\") pod \"35f335bf-9584-4205-8bab-e1f8b83cf0db\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.109935 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-internal-tls-certs\") pod \"35f335bf-9584-4205-8bab-e1f8b83cf0db\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.109974 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-scripts\") pod \"35f335bf-9584-4205-8bab-e1f8b83cf0db\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.110095 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-public-tls-certs\") pod \"35f335bf-9584-4205-8bab-e1f8b83cf0db\" (UID: \"35f335bf-9584-4205-8bab-e1f8b83cf0db\") " Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.118887 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f335bf-9584-4205-8bab-e1f8b83cf0db-kube-api-access-qjh4r" (OuterVolumeSpecName: "kube-api-access-qjh4r") pod "35f335bf-9584-4205-8bab-e1f8b83cf0db" (UID: "35f335bf-9584-4205-8bab-e1f8b83cf0db"). InnerVolumeSpecName "kube-api-access-qjh4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.147140 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-scripts" (OuterVolumeSpecName: "scripts") pod "35f335bf-9584-4205-8bab-e1f8b83cf0db" (UID: "35f335bf-9584-4205-8bab-e1f8b83cf0db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.187957 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "35f335bf-9584-4205-8bab-e1f8b83cf0db" (UID: "35f335bf-9584-4205-8bab-e1f8b83cf0db"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.217787 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjh4r\" (UniqueName: \"kubernetes.io/projected/35f335bf-9584-4205-8bab-e1f8b83cf0db-kube-api-access-qjh4r\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.217826 4792 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-scripts\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.217840 4792 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.223568 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "35f335bf-9584-4205-8bab-e1f8b83cf0db" (UID: "35f335bf-9584-4205-8bab-e1f8b83cf0db"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.275082 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35f335bf-9584-4205-8bab-e1f8b83cf0db" (UID: "35f335bf-9584-4205-8bab-e1f8b83cf0db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.297842 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-config-data" (OuterVolumeSpecName: "config-data") pod "35f335bf-9584-4205-8bab-e1f8b83cf0db" (UID: "35f335bf-9584-4205-8bab-e1f8b83cf0db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.320197 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.320268 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.320286 4792 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f335bf-9584-4205-8bab-e1f8b83cf0db-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 27 17:36:51 crc kubenswrapper[4792]: I1127 17:36:51.934134 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.008107 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.025371 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.037416 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 27 17:36:52 crc kubenswrapper[4792]: E1127 17:36:52.038489 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-listener" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.038522 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-listener" Nov 27 17:36:52 crc kubenswrapper[4792]: E1127 17:36:52.038551 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-evaluator" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.038567 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-evaluator" Nov 27 17:36:52 crc kubenswrapper[4792]: E1127 17:36:52.038634 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-api" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.038696 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-api" Nov 27 17:36:52 crc kubenswrapper[4792]: E1127 17:36:52.038744 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-notifier" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.038757 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-notifier" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.039339 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-api" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.039382 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-listener" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.039416 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-evaluator" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.039471 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" containerName="aodh-notifier" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.044733 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.047702 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.062700 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.063096 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.063342 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.063598 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.063950 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-vlns7" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.151074 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-internal-tls-certs\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.151196 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4bjj\" (UniqueName: \"kubernetes.io/projected/b5e40c25-6ce7-4631-9877-a7c983c966f7-kube-api-access-p4bjj\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.151233 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-scripts\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.151382 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-config-data\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.151409 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.151452 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-public-tls-certs\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.253547 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4bjj\" (UniqueName: \"kubernetes.io/projected/b5e40c25-6ce7-4631-9877-a7c983c966f7-kube-api-access-p4bjj\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " 
pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.253605 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-scripts\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.253755 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-config-data\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.253809 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.253853 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-public-tls-certs\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.253954 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-internal-tls-certs\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.259898 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.261460 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-config-data\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.262021 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-scripts\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.262276 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-internal-tls-certs\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.262770 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5e40c25-6ce7-4631-9877-a7c983c966f7-public-tls-certs\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.287048 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p4bjj\" (UniqueName: \"kubernetes.io/projected/b5e40c25-6ce7-4631-9877-a7c983c966f7-kube-api-access-p4bjj\") pod \"aodh-0\" (UID: \"b5e40c25-6ce7-4631-9877-a7c983c966f7\") " pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.390534 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.708209 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f335bf-9584-4205-8bab-e1f8b83cf0db" path="/var/lib/kubelet/pods/35f335bf-9584-4205-8bab-e1f8b83cf0db/volumes" Nov 27 17:36:52 crc kubenswrapper[4792]: W1127 17:36:52.922626 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5e40c25_6ce7_4631_9877_a7c983c966f7.slice/crio-e3e2d02dbb405eb934724b617019f602eb7017892972a05257b1beea67084faf WatchSource:0}: Error finding container e3e2d02dbb405eb934724b617019f602eb7017892972a05257b1beea67084faf: Status 404 returned error can't find the container with id e3e2d02dbb405eb934724b617019f602eb7017892972a05257b1beea67084faf Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.925588 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.947682 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" event={"ID":"cfb67295-f5ab-48cb-acae-25420d9d77f4","Type":"ContainerStarted","Data":"e9112cb2092f50937bf18c517b86354a9b020b5425b5b9ce83e67d60726a7c02"} Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.949620 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b5e40c25-6ce7-4631-9877-a7c983c966f7","Type":"ContainerStarted","Data":"e3e2d02dbb405eb934724b617019f602eb7017892972a05257b1beea67084faf"} Nov 27 17:36:52 crc kubenswrapper[4792]: I1127 17:36:52.963043 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" podStartSLOduration=2.215009089 podStartE2EDuration="6.963023732s" podCreationTimestamp="2025-11-27 17:36:46 +0000 UTC" firstStartedPulling="2025-11-27 17:36:47.816316561 +0000 UTC m=+1630.159142879" lastFinishedPulling="2025-11-27 17:36:52.564331204 +0000 UTC m=+1634.907157522" observedRunningTime="2025-11-27 17:36:52.959761071 +0000 UTC m=+1635.302587389" watchObservedRunningTime="2025-11-27 17:36:52.963023732 +0000 UTC m=+1635.305850050" Nov 27 17:36:53 crc kubenswrapper[4792]: I1127 17:36:53.964687 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b5e40c25-6ce7-4631-9877-a7c983c966f7","Type":"ContainerStarted","Data":"cd7eb5818386807739a7a16caf487e909dbcb984265ef32051ddd0f915341cc3"} Nov 27 17:36:54 crc kubenswrapper[4792]: I1127 17:36:54.980328 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b5e40c25-6ce7-4631-9877-a7c983c966f7","Type":"ContainerStarted","Data":"7ad4b7aed195ceddc04ffc3e4a7e9bca23a135ce32af15f5ffe1987ce3ceb71e"} Nov 27 17:36:55 crc kubenswrapper[4792]: I1127 17:36:55.990961 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b5e40c25-6ce7-4631-9877-a7c983c966f7","Type":"ContainerStarted","Data":"3309baffb4a7fabf780de6fb52f49d910409a774366b3c9de58170f21a4f9216"} Nov 27 17:36:57 crc kubenswrapper[4792]: I1127 17:36:57.003172 4792 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b5e40c25-6ce7-4631-9877-a7c983c966f7","Type":"ContainerStarted","Data":"2d41e86acf52495ac992ebcff0eca31be0f0b2c312df728ec516dfd8405536e1"} Nov 27 17:36:58 crc kubenswrapper[4792]: I1127 17:36:58.696359 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:36:58 crc kubenswrapper[4792]: E1127 17:36:58.697309 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:37:01 crc kubenswrapper[4792]: I1127 17:37:01.138803 4792 scope.go:117] "RemoveContainer" containerID="25577c162a3055549c432017c68328aafaa545ebd2530ef5c7d10af5e2556c7c" Nov 27 17:37:09 crc kubenswrapper[4792]: I1127 17:37:09.686503 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:37:09 crc kubenswrapper[4792]: E1127 17:37:09.687355 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:37:21 crc kubenswrapper[4792]: I1127 17:37:21.687742 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:37:21 crc kubenswrapper[4792]: E1127 17:37:21.688790 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:37:34 crc kubenswrapper[4792]: I1127 17:37:34.687288 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:37:34 crc kubenswrapper[4792]: E1127 17:37:34.688161 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:37:49 crc kubenswrapper[4792]: I1127 17:37:49.722104 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:37:49 crc kubenswrapper[4792]: E1127 17:37:49.723344 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:38:00 crc kubenswrapper[4792]: I1127 17:38:00.687177 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:38:00 crc kubenswrapper[4792]: E1127 17:38:00.688533 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:38:01 crc kubenswrapper[4792]: I1127 17:38:01.339002 4792 scope.go:117] "RemoveContainer" containerID="504af4e2e96f24dfdc95937394da32df2753de9c7f77c8304925934911c42b15" Nov 27 17:38:01 crc kubenswrapper[4792]: I1127 17:38:01.365804 4792 scope.go:117] "RemoveContainer" containerID="081346cc5ad6b978d5675d4860468b68fc702db31b8fb05002ba8d49b31e1629" Nov 27 17:38:01 crc kubenswrapper[4792]: I1127 17:38:01.415312 4792 scope.go:117] "RemoveContainer" containerID="2455a6f689166ee43f49fdb87d34ae25094bf9bd53c1809dd5c56d426bfa4325" Nov 27 17:38:01 crc kubenswrapper[4792]: I1127 17:38:01.468908 4792 scope.go:117] "RemoveContainer" containerID="bd1041b51280eda217136c8218c55d59a70365228887278f1c67a961cbe62098" Nov 27 17:38:13 crc kubenswrapper[4792]: I1127 17:38:13.688052 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:38:13 crc kubenswrapper[4792]: E1127 17:38:13.689215 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:38:28 crc kubenswrapper[4792]: I1127 17:38:28.694668 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:38:28 crc kubenswrapper[4792]: E1127 17:38:28.695548 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:38:41 crc kubenswrapper[4792]: I1127 17:38:41.686965 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:38:41 crc kubenswrapper[4792]: E1127 17:38:41.687822 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:38:55 crc kubenswrapper[4792]: I1127 17:38:55.687749 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:38:55 crc kubenswrapper[4792]: E1127 17:38:55.690631 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:39:06 crc kubenswrapper[4792]: I1127 17:39:06.689519 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:39:06 crc kubenswrapper[4792]: E1127 17:39:06.691121 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:39:18 crc kubenswrapper[4792]: I1127 17:39:18.710234 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:39:18 crc kubenswrapper[4792]: E1127 17:39:18.711372 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:39:32 crc kubenswrapper[4792]: I1127 17:39:32.687284 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:39:32 crc kubenswrapper[4792]: E1127 17:39:32.688450 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:39:44 crc kubenswrapper[4792]: I1127 17:39:44.686919 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:39:44 crc kubenswrapper[4792]: E1127 17:39:44.688934 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:39:56 crc kubenswrapper[4792]: I1127 17:39:56.687046 4792 
scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:39:56 crc kubenswrapper[4792]: E1127 17:39:56.687955 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:40:01 crc kubenswrapper[4792]: I1127 17:40:01.634509 4792 scope.go:117] "RemoveContainer" containerID="5d7d7a8fb0e99d62d68cd6ee9bbef4bff13f2fea7ec50c92e54379e9a7c68d09" Nov 27 17:40:08 crc kubenswrapper[4792]: I1127 17:40:08.640105 4792 generic.go:334] "Generic (PLEG): container finished" podID="cfb67295-f5ab-48cb-acae-25420d9d77f4" containerID="e9112cb2092f50937bf18c517b86354a9b020b5425b5b9ce83e67d60726a7c02" exitCode=0 Nov 27 17:40:08 crc kubenswrapper[4792]: I1127 17:40:08.640186 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" event={"ID":"cfb67295-f5ab-48cb-acae-25420d9d77f4","Type":"ContainerDied","Data":"e9112cb2092f50937bf18c517b86354a9b020b5425b5b9ce83e67d60726a7c02"} Nov 27 17:40:08 crc kubenswrapper[4792]: I1127 17:40:08.674386 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=193.992555433 podStartE2EDuration="3m17.674367014s" podCreationTimestamp="2025-11-27 17:36:51 +0000 UTC" firstStartedPulling="2025-11-27 17:36:52.925964772 +0000 UTC m=+1635.268791090" lastFinishedPulling="2025-11-27 17:36:56.607776353 +0000 UTC m=+1638.950602671" observedRunningTime="2025-11-27 17:36:57.046267329 +0000 UTC m=+1639.389093657" watchObservedRunningTime="2025-11-27 17:40:08.674367014 +0000 UTC m=+1831.017193332" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.162859 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.281618 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx7d2\" (UniqueName: \"kubernetes.io/projected/cfb67295-f5ab-48cb-acae-25420d9d77f4-kube-api-access-rx7d2\") pod \"cfb67295-f5ab-48cb-acae-25420d9d77f4\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.281935 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-inventory\") pod \"cfb67295-f5ab-48cb-acae-25420d9d77f4\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.282050 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-bootstrap-combined-ca-bundle\") pod \"cfb67295-f5ab-48cb-acae-25420d9d77f4\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.282957 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-ssh-key\") pod \"cfb67295-f5ab-48cb-acae-25420d9d77f4\" (UID: \"cfb67295-f5ab-48cb-acae-25420d9d77f4\") " Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.289760 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb67295-f5ab-48cb-acae-25420d9d77f4-kube-api-access-rx7d2" (OuterVolumeSpecName: "kube-api-access-rx7d2") pod "cfb67295-f5ab-48cb-acae-25420d9d77f4" (UID: "cfb67295-f5ab-48cb-acae-25420d9d77f4"). InnerVolumeSpecName "kube-api-access-rx7d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.291925 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "cfb67295-f5ab-48cb-acae-25420d9d77f4" (UID: "cfb67295-f5ab-48cb-acae-25420d9d77f4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.318716 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-inventory" (OuterVolumeSpecName: "inventory") pod "cfb67295-f5ab-48cb-acae-25420d9d77f4" (UID: "cfb67295-f5ab-48cb-acae-25420d9d77f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.327990 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cfb67295-f5ab-48cb-acae-25420d9d77f4" (UID: "cfb67295-f5ab-48cb-acae-25420d9d77f4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.386882 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.386907 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx7d2\" (UniqueName: \"kubernetes.io/projected/cfb67295-f5ab-48cb-acae-25420d9d77f4-kube-api-access-rx7d2\") on node \"crc\" DevicePath \"\"" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.386917 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.386927 4792 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb67295-f5ab-48cb-acae-25420d9d77f4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.664000 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" event={"ID":"cfb67295-f5ab-48cb-acae-25420d9d77f4","Type":"ContainerDied","Data":"adbcbfe61c5a25fb36f8c3f4b88780b4b482d62c071b348aae12d4dce3edad47"} Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.664044 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adbcbfe61c5a25fb36f8c3f4b88780b4b482d62c071b348aae12d4dce3edad47" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.664075 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.687270 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.778093 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc"] Nov 27 17:40:10 crc kubenswrapper[4792]: E1127 17:40:10.778752 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb67295-f5ab-48cb-acae-25420d9d77f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.778777 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb67295-f5ab-48cb-acae-25420d9d77f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.779134 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb67295-f5ab-48cb-acae-25420d9d77f4" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.780209 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.783116 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.783199 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.783124 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.784753 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.812455 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc"] Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.903817 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.904174 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7x79\" (UniqueName: \"kubernetes.io/projected/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-kube-api-access-s7x79\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:10 crc kubenswrapper[4792]: I1127 17:40:10.904667 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.007682 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7x79\" (UniqueName: \"kubernetes.io/projected/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-kube-api-access-s7x79\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.008226 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.008398 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.013379 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.013481 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.028235 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7x79\" (UniqueName: \"kubernetes.io/projected/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-kube-api-access-s7x79\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-87lxc\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.179837 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.684323 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"8a39d10190b610649d44c98afe3563275dbc74e7d629b3db59b3d9af2418ae45"} Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.764671 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc"] Nov 27 17:40:11 crc kubenswrapper[4792]: I1127 17:40:11.771123 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 17:40:12 crc kubenswrapper[4792]: I1127 17:40:12.703950 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" event={"ID":"2b8542bf-b789-4e1a-9ff9-5375dc57cc94","Type":"ContainerStarted","Data":"8d8fd167085ff54ecab4adad7a8964cbcad1f78f4520955f8005d1bae0057193"} Nov 27 17:40:13 crc kubenswrapper[4792]: I1127 17:40:13.710278 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" event={"ID":"2b8542bf-b789-4e1a-9ff9-5375dc57cc94","Type":"ContainerStarted","Data":"4e5de50d4bb16b67d03503ca9cbdf676fa6a1a37b3d9c27bc9f3c4adbceee36c"} Nov 27 17:40:13 crc kubenswrapper[4792]: I1127 17:40:13.745807 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" podStartSLOduration=2.982990565 podStartE2EDuration="3.745780859s" podCreationTimestamp="2025-11-27 17:40:10 +0000 UTC" firstStartedPulling="2025-11-27 17:40:11.770928592 +0000 UTC m=+1834.113754910" lastFinishedPulling="2025-11-27 17:40:12.533718876 +0000 UTC 
m=+1834.876545204" observedRunningTime="2025-11-27 17:40:13.733319291 +0000 UTC m=+1836.076145599" watchObservedRunningTime="2025-11-27 17:40:13.745780859 +0000 UTC m=+1836.088607177" Nov 27 17:40:17 crc kubenswrapper[4792]: I1127 17:40:17.066486 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-pktkm"] Nov 27 17:40:17 crc kubenswrapper[4792]: I1127 17:40:17.080454 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-pktkm"] Nov 27 17:40:18 crc kubenswrapper[4792]: I1127 17:40:18.045857 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-81fd-account-create-update-9tmsm"] Nov 27 17:40:18 crc kubenswrapper[4792]: I1127 17:40:18.061595 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-81fd-account-create-update-9tmsm"] Nov 27 17:40:18 crc kubenswrapper[4792]: I1127 17:40:18.708627 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c3345f6-9d76-4087-a0ad-037e8ee66a87" path="/var/lib/kubelet/pods/0c3345f6-9d76-4087-a0ad-037e8ee66a87/volumes" Nov 27 17:40:18 crc kubenswrapper[4792]: I1127 17:40:18.709895 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f52348a4-2e60-4f12-a73b-c70e7134dc0f" path="/var/lib/kubelet/pods/f52348a4-2e60-4f12-a73b-c70e7134dc0f/volumes" Nov 27 17:40:22 crc kubenswrapper[4792]: I1127 17:40:22.039325 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f"] Nov 27 17:40:22 crc kubenswrapper[4792]: I1127 17:40:22.050126 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-ctg6f"] Nov 27 17:40:22 crc kubenswrapper[4792]: I1127 17:40:22.706802 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c410408-137e-4436-b401-2e9b2de55ac1" path="/var/lib/kubelet/pods/9c410408-137e-4436-b401-2e9b2de55ac1/volumes" Nov 27 17:40:23 crc kubenswrapper[4792]: I1127 17:40:23.050899 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-b4b8-account-create-update-xdc22"] Nov 27 17:40:23 crc kubenswrapper[4792]: I1127 17:40:23.064811 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-b4b8-account-create-update-xdc22"] Nov 27 17:40:24 crc kubenswrapper[4792]: I1127 17:40:24.709330 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1907ef-3b87-4899-a1f7-75096bb1b94e" path="/var/lib/kubelet/pods/de1907ef-3b87-4899-a1f7-75096bb1b94e/volumes" Nov 27 17:40:25 crc kubenswrapper[4792]: I1127 17:40:25.028600 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-j9j8r"] Nov 27 17:40:25 crc kubenswrapper[4792]: I1127 17:40:25.045978 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3ac4-account-create-update-46n8p"] Nov 27 17:40:25 crc kubenswrapper[4792]: I1127 17:40:25.056301 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3ac4-account-create-update-46n8p"] Nov 27 17:40:25 crc kubenswrapper[4792]: I1127 17:40:25.068174 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-j9j8r"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.041299 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-r24r2"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.053834 4792 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6878-account-create-update-w2x65"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.065136 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8b76-account-create-update-wb5pq"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.075452 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-r24r2"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.087394 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-csvr8"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.098056 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8b76-account-create-update-wb5pq"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.107758 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-csvr8"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.117425 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6878-account-create-update-w2x65"] Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.706299 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bda37ed-63e8-4cdc-98c9-50025ead6629" path="/var/lib/kubelet/pods/2bda37ed-63e8-4cdc-98c9-50025ead6629/volumes" Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.709242 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5603bece-14a8-4764-8a38-000aa3ea0199" path="/var/lib/kubelet/pods/5603bece-14a8-4764-8a38-000aa3ea0199/volumes" Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.711058 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c49a20e-fc3b-4a3a-8a88-a300599bc1d6" path="/var/lib/kubelet/pods/8c49a20e-fc3b-4a3a-8a88-a300599bc1d6/volumes" Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.711959 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90c5f747-a058-4b9d-bc56-54a80f7a980c" path="/var/lib/kubelet/pods/90c5f747-a058-4b9d-bc56-54a80f7a980c/volumes" Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.714305 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6903227-0287-490a-b643-b9eecab193dd" path="/var/lib/kubelet/pods/c6903227-0287-490a-b643-b9eecab193dd/volumes" Nov 27 17:40:26 crc kubenswrapper[4792]: I1127 17:40:26.715155 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0033423-6c17-45fe-8356-e655c26af92b" path="/var/lib/kubelet/pods/f0033423-6c17-45fe-8356-e655c26af92b/volumes" Nov 27 17:40:54 crc kubenswrapper[4792]: I1127 17:40:54.057802 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8wxff"] Nov 27 17:40:54 crc kubenswrapper[4792]: I1127 17:40:54.067819 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e32d-account-create-update-cnfgl"] Nov 27 17:40:54 crc kubenswrapper[4792]: I1127 17:40:54.079524 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e32d-account-create-update-cnfgl"] Nov 27 17:40:54 crc kubenswrapper[4792]: I1127 17:40:54.090543 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-8wxff"] Nov 27 17:40:54 crc kubenswrapper[4792]: I1127 17:40:54.710942 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc1c911-18ed-47e3-9028-e3accc0567fe" path="/var/lib/kubelet/pods/6fc1c911-18ed-47e3-9028-e3accc0567fe/volumes" Nov 27 17:40:54 crc 
kubenswrapper[4792]: I1127 17:40:54.712149 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baa1189f-a300-44d7-9a8b-008bc13f13f7" path="/var/lib/kubelet/pods/baa1189f-a300-44d7-9a8b-008bc13f13f7/volumes" Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.036711 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-hztnj"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.056655 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-545fj"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.072596 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-hztnj"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.085530 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e836-account-create-update-tshc7"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.098068 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-cg4t6"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.109071 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0b23-account-create-update-q8c7m"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.122824 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-545fj"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.135584 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0b23-account-create-update-q8c7m"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.147724 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e836-account-create-update-tshc7"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.158002 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-cg4t6"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.168589 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5m4nt"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.179959 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5m4nt"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.190212 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-c035-account-create-update-tfx4j"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.200205 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-c035-account-create-update-tfx4j"] Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.701340 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3492cbfb-1b2a-4a4b-bad4-d577328a4fcd" path="/var/lib/kubelet/pods/3492cbfb-1b2a-4a4b-bad4-d577328a4fcd/volumes" Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.703777 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65732d4f-b7b6-44fd-a200-22f9a70de277" path="/var/lib/kubelet/pods/65732d4f-b7b6-44fd-a200-22f9a70de277/volumes" Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.704985 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e13bff-170f-4963-856b-406322ac0ab4" path="/var/lib/kubelet/pods/79e13bff-170f-4963-856b-406322ac0ab4/volumes" Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.705872 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3" path="/var/lib/kubelet/pods/8b243cfb-4dc0-4b6f-b0bb-f4fe72c8b8c3/volumes" Nov 27 
17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.706784 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b04ada6c-7744-4237-8361-9c5cccad61b3" path="/var/lib/kubelet/pods/b04ada6c-7744-4237-8361-9c5cccad61b3/volumes" Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.708228 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba88d25c-da6d-421d-bfe5-cee0ea2d9b96" path="/var/lib/kubelet/pods/ba88d25c-da6d-421d-bfe5-cee0ea2d9b96/volumes" Nov 27 17:40:58 crc kubenswrapper[4792]: I1127 17:40:58.708881 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced818ea-81fd-421d-b134-308b13cc2076" path="/var/lib/kubelet/pods/ced818ea-81fd-421d-b134-308b13cc2076/volumes" Nov 27 17:41:01 crc kubenswrapper[4792]: I1127 17:41:01.731220 4792 scope.go:117] "RemoveContainer" containerID="ac9fb9e5908311fe5a378f2c7d163c8bb3114982c12e158c33c1507e8ca479fd" Nov 27 17:41:01 crc kubenswrapper[4792]: I1127 17:41:01.765937 4792 scope.go:117] "RemoveContainer" containerID="0cb898e66a9f1f469f7f18418ed7b97aa43788a9742d2a60ea0870c786fd62db" Nov 27 17:41:01 crc kubenswrapper[4792]: I1127 17:41:01.817708 4792 scope.go:117] "RemoveContainer" containerID="e8b47678d5b1e188349bc1cc86a92649caa5707ffbee33812c3aef91ea901908" Nov 27 17:41:01 crc kubenswrapper[4792]: I1127 17:41:01.875694 4792 scope.go:117] "RemoveContainer" containerID="03aa41992e6a52bc497dbd00daf24b7207765571dc9f9a0d6535247263d72817" Nov 27 17:41:01 crc kubenswrapper[4792]: I1127 17:41:01.938981 4792 scope.go:117] "RemoveContainer" containerID="8eed29aeeab63d9ffe91006391cd3a0403cceb48782f6738e826eeeb51c0f3a4" Nov 27 17:41:01 crc kubenswrapper[4792]: I1127 17:41:01.995397 4792 scope.go:117] "RemoveContainer" containerID="ad22d44a6d8beceebcfcb49ddb6cd0c68d4477b691402937500ce834132dab60" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.032448 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pdqxz"] Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.049066 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pdqxz"] Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.056993 4792 scope.go:117] "RemoveContainer" containerID="3c10aa265ef8396c4e48cb4df30d2f4dc6d22f609b5c501b8cbe352be9772642" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.078370 4792 scope.go:117] "RemoveContainer" containerID="882f58d03fab635ede16b549ccc6f16e4e525f8e15d5647cda2132f051cac36e" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.096627 4792 scope.go:117] "RemoveContainer" containerID="74553bd5855243d6478552fb25f6d54df05f0c1556a0ae51f5ba8914e093aee1" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.154110 4792 scope.go:117] "RemoveContainer" containerID="5c536dda591376999072eaf5f89bb031397384bcad5a932a7775be4aafc7d367" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.174792 4792 scope.go:117] "RemoveContainer" containerID="052abeaf02e2566f5fd89cf551bd2e18f28e5418e90fe261a84bf97c763717d3" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.196388 4792 scope.go:117] "RemoveContainer" containerID="293e7cf97954891e39541ec865cf261cb95c36675cf85bbaad3c609f93cf323e" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.214769 4792 scope.go:117] "RemoveContainer" containerID="5aaad17c3dbf3bb931fa3c3ca7e72c3e096756eb0874d2cc6ce445e1c617a0a2" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.244871 4792 scope.go:117] "RemoveContainer" 
containerID="f8d2b862fda9f57e944f5d80c95cb15e591970ebca80074e98bea07822a6f2b4" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.265072 4792 scope.go:117] "RemoveContainer" containerID="6023424e6aa4f78400771f16622a3e528b01d0e970d750fc8b8860502279190a" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.285686 4792 scope.go:117] "RemoveContainer" containerID="441ab28921b66a6e552cad9b6a59176d4e86140e97996262282f2955130d87a9" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.305227 4792 scope.go:117] "RemoveContainer" containerID="ffa4434242fc4093842308e5cd53fa45f6b558607983153cafa04d111eb30606" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.326343 4792 scope.go:117] "RemoveContainer" containerID="388c2e2bc52bf1525ecaeff0f766459b888ab06136a2db1634c0f40fddd3523e" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.347760 4792 scope.go:117] "RemoveContainer" containerID="ad81c6af957b70eb152622420787e4803bbdb61cf76332303d9885cd96318311" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.370601 4792 scope.go:117] "RemoveContainer" containerID="db15b93b93b449a99385ceaf30549758c8a99ce857c94da356a8f1ea058b6e07" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.396462 4792 scope.go:117] "RemoveContainer" containerID="f71a0ea8d82fcad94cf63e49123b6ac0d609379affa87069e10f839f98a04b48" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.423889 4792 scope.go:117] "RemoveContainer" containerID="5db4d07c1cda5ba33a1263a82a2d3bd02028a8939cb0caeb7c1c50ec4d89817f" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.451839 4792 scope.go:117] "RemoveContainer" containerID="1ef7a1e5844ab1f9445bbb0c16c2610237ada53ba51bb79275db18f2b26c1e6b" Nov 27 17:41:02 crc kubenswrapper[4792]: I1127 17:41:02.699835 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb873a37-184a-4086-b6c6-3164fa76cbce" path="/var/lib/kubelet/pods/fb873a37-184a-4086-b6c6-3164fa76cbce/volumes" Nov 27 17:41:33 crc kubenswrapper[4792]: I1127 17:41:33.055473 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-nnctq"] Nov 27 17:41:33 crc kubenswrapper[4792]: I1127 17:41:33.071855 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-nnctq"] Nov 27 17:41:34 crc kubenswrapper[4792]: I1127 17:41:34.703065 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bacfd25c-6929-437c-887b-02b5b3f33b1e" path="/var/lib/kubelet/pods/bacfd25c-6929-437c-887b-02b5b3f33b1e/volumes" Nov 27 17:41:44 crc kubenswrapper[4792]: I1127 17:41:44.045051 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-658m9"] Nov 27 17:41:44 crc kubenswrapper[4792]: I1127 17:41:44.060997 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-658m9"] Nov 27 17:41:44 crc kubenswrapper[4792]: I1127 17:41:44.700257 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821eeff5-b48a-4380-986e-9a9f3bb929eb" path="/var/lib/kubelet/pods/821eeff5-b48a-4380-986e-9a9f3bb929eb/volumes" Nov 27 17:41:46 crc kubenswrapper[4792]: I1127 17:41:46.037180 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4xhxm"] Nov 27 17:41:46 crc kubenswrapper[4792]: I1127 17:41:46.051782 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4xhxm"] Nov 27 17:41:46 crc kubenswrapper[4792]: I1127 17:41:46.068725 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xbj9c"] Nov 27 
17:41:46 crc kubenswrapper[4792]: I1127 17:41:46.079208 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xbj9c"] Nov 27 17:41:46 crc kubenswrapper[4792]: I1127 17:41:46.707345 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477a2f0e-b663-45b8-8541-f4d93f420304" path="/var/lib/kubelet/pods/477a2f0e-b663-45b8-8541-f4d93f420304/volumes" Nov 27 17:41:46 crc kubenswrapper[4792]: I1127 17:41:46.708279 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22e78cf-a700-4a68-8b79-fdb0dc988a04" path="/var/lib/kubelet/pods/b22e78cf-a700-4a68-8b79-fdb0dc988a04/volumes" Nov 27 17:41:58 crc kubenswrapper[4792]: I1127 17:41:58.043412 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vcdv9"] Nov 27 17:41:58 crc kubenswrapper[4792]: I1127 17:41:58.053930 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vcdv9"] Nov 27 17:41:58 crc kubenswrapper[4792]: I1127 17:41:58.704483 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f1c2409-1610-4ede-ab33-880b170c802f" path="/var/lib/kubelet/pods/2f1c2409-1610-4ede-ab33-880b170c802f/volumes" Nov 27 17:42:02 crc kubenswrapper[4792]: I1127 17:42:02.901920 4792 scope.go:117] "RemoveContainer" containerID="eb1bed6ce26871f6ff33f38050f33b8dc0a72ec45befbead0765b55690bb999e" Nov 27 17:42:02 crc kubenswrapper[4792]: I1127 17:42:02.931690 4792 scope.go:117] "RemoveContainer" containerID="bdb7e5400f0d3628a2149c8d4d991b96cd85b6980fc6794084a1d6301e9320be" Nov 27 17:42:03 crc kubenswrapper[4792]: I1127 17:42:03.000708 4792 scope.go:117] "RemoveContainer" containerID="701477e73b8a7e892e10f998ffe3b15379564860acf8085515f92229be2fb539" Nov 27 17:42:03 crc kubenswrapper[4792]: I1127 17:42:03.054164 4792 scope.go:117] "RemoveContainer" containerID="be55da7b0d2e853f86ec64d21d10ff1467c329f2506bba53fbfc525ec773902f" Nov 27 17:42:03 crc kubenswrapper[4792]: I1127 17:42:03.124129 4792 scope.go:117] "RemoveContainer" containerID="ffc4651c0ee50b61bc6b7e81cb27476ff6cbccd8a03264d28e8e4b6f1db7a23a" Nov 27 17:42:03 crc kubenswrapper[4792]: I1127 17:42:03.179023 4792 scope.go:117] "RemoveContainer" containerID="c722c74221b9264a985633f99d530fadbf5f4267f6a2a18c515ed968d61c7a09" Nov 27 17:42:09 crc kubenswrapper[4792]: I1127 17:42:09.887061 4792 generic.go:334] "Generic (PLEG): container finished" podID="2b8542bf-b789-4e1a-9ff9-5375dc57cc94" containerID="4e5de50d4bb16b67d03503ca9cbdf676fa6a1a37b3d9c27bc9f3c4adbceee36c" exitCode=0 Nov 27 17:42:09 crc kubenswrapper[4792]: I1127 17:42:09.887136 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" event={"ID":"2b8542bf-b789-4e1a-9ff9-5375dc57cc94","Type":"ContainerDied","Data":"4e5de50d4bb16b67d03503ca9cbdf676fa6a1a37b3d9c27bc9f3c4adbceee36c"} Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.380072 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.488596 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-inventory\") pod \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.488992 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-ssh-key\") pod \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.489033 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7x79\" (UniqueName: \"kubernetes.io/projected/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-kube-api-access-s7x79\") pod \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\" (UID: \"2b8542bf-b789-4e1a-9ff9-5375dc57cc94\") " Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.518844 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-kube-api-access-s7x79" (OuterVolumeSpecName: "kube-api-access-s7x79") pod "2b8542bf-b789-4e1a-9ff9-5375dc57cc94" (UID: "2b8542bf-b789-4e1a-9ff9-5375dc57cc94"). InnerVolumeSpecName "kube-api-access-s7x79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.557906 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2b8542bf-b789-4e1a-9ff9-5375dc57cc94" (UID: "2b8542bf-b789-4e1a-9ff9-5375dc57cc94"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.592040 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.592080 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7x79\" (UniqueName: \"kubernetes.io/projected/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-kube-api-access-s7x79\") on node \"crc\" DevicePath \"\"" Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.601624 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-inventory" (OuterVolumeSpecName: "inventory") pod "2b8542bf-b789-4e1a-9ff9-5375dc57cc94" (UID: "2b8542bf-b789-4e1a-9ff9-5375dc57cc94"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.693951 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b8542bf-b789-4e1a-9ff9-5375dc57cc94-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.941307 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" event={"ID":"2b8542bf-b789-4e1a-9ff9-5375dc57cc94","Type":"ContainerDied","Data":"8d8fd167085ff54ecab4adad7a8964cbcad1f78f4520955f8005d1bae0057193"} Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.941385 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d8fd167085ff54ecab4adad7a8964cbcad1f78f4520955f8005d1bae0057193" Nov 27 17:42:11 crc kubenswrapper[4792]: I1127 17:42:11.941413 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-87lxc" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.032127 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq"] Nov 27 17:42:12 crc kubenswrapper[4792]: E1127 17:42:12.032731 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8542bf-b789-4e1a-9ff9-5375dc57cc94" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.032752 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8542bf-b789-4e1a-9ff9-5375dc57cc94" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.033102 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8542bf-b789-4e1a-9ff9-5375dc57cc94" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.034015 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.043584 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.043973 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.044285 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.044544 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.044876 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq"] Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.103032 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.103076 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.103207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9w2\" (UniqueName: \"kubernetes.io/projected/941f3fd2-382e-4dc2-94f4-39df69607cee-kube-api-access-4n9w2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.206221 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9w2\" (UniqueName: \"kubernetes.io/projected/941f3fd2-382e-4dc2-94f4-39df69607cee-kube-api-access-4n9w2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.206388 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.206429 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.211062 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.211430 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.225449 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9w2\" (UniqueName: \"kubernetes.io/projected/941f3fd2-382e-4dc2-94f4-39df69607cee-kube-api-access-4n9w2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.364779 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:42:12 crc kubenswrapper[4792]: I1127 17:42:12.967548 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq"] Nov 27 17:42:12 crc kubenswrapper[4792]: W1127 17:42:12.971896 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod941f3fd2_382e_4dc2_94f4_39df69607cee.slice/crio-c3536c9e7678a2ddfada2827eab0d0eb895e516c8e51040133aa82163ac4a34d WatchSource:0}: Error finding container c3536c9e7678a2ddfada2827eab0d0eb895e516c8e51040133aa82163ac4a34d: Status 404 returned error can't find the container with id c3536c9e7678a2ddfada2827eab0d0eb895e516c8e51040133aa82163ac4a34d Nov 27 17:42:13 crc kubenswrapper[4792]: I1127 17:42:13.964023 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" event={"ID":"941f3fd2-382e-4dc2-94f4-39df69607cee","Type":"ContainerStarted","Data":"c3536c9e7678a2ddfada2827eab0d0eb895e516c8e51040133aa82163ac4a34d"} Nov 27 17:42:14 crc kubenswrapper[4792]: I1127 17:42:14.973471 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" event={"ID":"941f3fd2-382e-4dc2-94f4-39df69607cee","Type":"ContainerStarted","Data":"42e5104be93b50ecb01b00123356797cd750ed60cc10fae5d8528a9cd5d916bd"} Nov 27 17:42:14 crc kubenswrapper[4792]: I1127 17:42:14.999478 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" podStartSLOduration=2.107311178 podStartE2EDuration="2.999460748s" podCreationTimestamp="2025-11-27 17:42:12 +0000 UTC" firstStartedPulling="2025-11-27 17:42:12.973817676 +0000 UTC 
m=+1955.316644004" lastFinishedPulling="2025-11-27 17:42:13.865967246 +0000 UTC m=+1956.208793574" observedRunningTime="2025-11-27 17:42:14.989564823 +0000 UTC m=+1957.332391141" watchObservedRunningTime="2025-11-27 17:42:14.999460748 +0000 UTC m=+1957.342287066" Nov 27 17:42:38 crc kubenswrapper[4792]: I1127 17:42:38.290448 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:42:38 crc kubenswrapper[4792]: I1127 17:42:38.291116 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.057135 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-t282f"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.073318 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-fgbxf"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.087100 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f4e9-account-create-update-4td2w"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.098925 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8647-account-create-update-njdbf"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.108833 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mc9d5"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.119016 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0ac2-account-create-update-wsmcm"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.130403 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-t282f"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.140966 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8647-account-create-update-njdbf"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.151857 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mc9d5"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.162370 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-fgbxf"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.172971 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f4e9-account-create-update-4td2w"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.183486 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0ac2-account-create-update-wsmcm"] Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.702850 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d93a918-1e28-4147-a6f4-fdf1572c40c8" path="/var/lib/kubelet/pods/3d93a918-1e28-4147-a6f4-fdf1572c40c8/volumes" Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.704335 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f1287d1-aab3-4631-9aa7-f208aedcf915" path="/var/lib/kubelet/pods/7f1287d1-aab3-4631-9aa7-f208aedcf915/volumes" Nov 
27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.705701 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb12910-da22-4f2b-85ba-31ea98c5ee73" path="/var/lib/kubelet/pods/8bb12910-da22-4f2b-85ba-31ea98c5ee73/volumes" Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.707037 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6e9c50-f83f-4b2c-975d-38f300d84169" path="/var/lib/kubelet/pods/9a6e9c50-f83f-4b2c-975d-38f300d84169/volumes" Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.709315 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b57df2fa-2e89-48d4-86ae-cd332706de3f" path="/var/lib/kubelet/pods/b57df2fa-2e89-48d4-86ae-cd332706de3f/volumes" Nov 27 17:42:40 crc kubenswrapper[4792]: I1127 17:42:40.710592 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb418a81-4c30-47e4-8f2c-8ce1d96cbed9" path="/var/lib/kubelet/pods/bb418a81-4c30-47e4-8f2c-8ce1d96cbed9/volumes" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.347483 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hbfqw"] Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.351988 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.365204 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbfqw"] Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.408068 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmc6\" (UniqueName: \"kubernetes.io/projected/6767091a-7e57-427b-8259-ec667bcdccea-kube-api-access-jqmc6\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.408364 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-utilities\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.408485 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-catalog-content\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.511440 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-catalog-content\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.511860 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqmc6\" (UniqueName: \"kubernetes.io/projected/6767091a-7e57-427b-8259-ec667bcdccea-kube-api-access-jqmc6\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " 
pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.512074 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-catalog-content\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.512252 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-utilities\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.512578 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-utilities\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.541545 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqmc6\" (UniqueName: \"kubernetes.io/projected/6767091a-7e57-427b-8259-ec667bcdccea-kube-api-access-jqmc6\") pod \"community-operators-hbfqw\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:42 crc kubenswrapper[4792]: I1127 17:42:42.672398 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:43 crc kubenswrapper[4792]: I1127 17:42:43.374163 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hbfqw"] Nov 27 17:42:43 crc kubenswrapper[4792]: W1127 17:42:43.380737 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6767091a_7e57_427b_8259_ec667bcdccea.slice/crio-5688b355dab36cfdaef4fafe769a25fdf8b1b82dc87e40c5a39e7d2b5f20301f WatchSource:0}: Error finding container 5688b355dab36cfdaef4fafe769a25fdf8b1b82dc87e40c5a39e7d2b5f20301f: Status 404 returned error can't find the container with id 5688b355dab36cfdaef4fafe769a25fdf8b1b82dc87e40c5a39e7d2b5f20301f Nov 27 17:42:44 crc kubenswrapper[4792]: I1127 17:42:44.339186 4792 generic.go:334] "Generic (PLEG): container finished" podID="6767091a-7e57-427b-8259-ec667bcdccea" containerID="edf3143be6fd309905346ded2cbcc1888adbff597139e1c683ed75205b5c82db" exitCode=0 Nov 27 17:42:44 crc kubenswrapper[4792]: I1127 17:42:44.339238 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbfqw" event={"ID":"6767091a-7e57-427b-8259-ec667bcdccea","Type":"ContainerDied","Data":"edf3143be6fd309905346ded2cbcc1888adbff597139e1c683ed75205b5c82db"} Nov 27 17:42:44 crc kubenswrapper[4792]: I1127 17:42:44.339484 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbfqw" event={"ID":"6767091a-7e57-427b-8259-ec667bcdccea","Type":"ContainerStarted","Data":"5688b355dab36cfdaef4fafe769a25fdf8b1b82dc87e40c5a39e7d2b5f20301f"} Nov 27 17:42:44 crc kubenswrapper[4792]: I1127 17:42:44.947620 4792 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-xmr4h"] Nov 27 17:42:44 crc kubenswrapper[4792]: I1127 17:42:44.952117 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:44 crc kubenswrapper[4792]: I1127 17:42:44.963519 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmr4h"] Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.119807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-catalog-content\") pod \"certified-operators-xmr4h\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.120458 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-utilities\") pod \"certified-operators-xmr4h\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.120520 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2lqf\" (UniqueName: \"kubernetes.io/projected/deca9f23-96a3-4563-aca4-fe32e4303269-kube-api-access-q2lqf\") pod \"certified-operators-xmr4h\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.222990 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-utilities\") pod \"certified-operators-xmr4h\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.223038 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2lqf\" (UniqueName: \"kubernetes.io/projected/deca9f23-96a3-4563-aca4-fe32e4303269-kube-api-access-q2lqf\") pod \"certified-operators-xmr4h\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.223092 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-catalog-content\") pod \"certified-operators-xmr4h\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.223596 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-utilities\") pod \"certified-operators-xmr4h\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.223621 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-catalog-content\") pod \"certified-operators-xmr4h\" (UID: 
\"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.249146 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2lqf\" (UniqueName: \"kubernetes.io/projected/deca9f23-96a3-4563-aca4-fe32e4303269-kube-api-access-q2lqf\") pod \"certified-operators-xmr4h\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") " pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.280619 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:45 crc kubenswrapper[4792]: I1127 17:42:45.794232 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmr4h"] Nov 27 17:42:45 crc kubenswrapper[4792]: W1127 17:42:45.802861 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddeca9f23_96a3_4563_aca4_fe32e4303269.slice/crio-9ded8cae892de48ccf58fa521cf21e5d6b7b0cdc1252bac739b543c2c5f95108 WatchSource:0}: Error finding container 9ded8cae892de48ccf58fa521cf21e5d6b7b0cdc1252bac739b543c2c5f95108: Status 404 returned error can't find the container with id 9ded8cae892de48ccf58fa521cf21e5d6b7b0cdc1252bac739b543c2c5f95108 Nov 27 17:42:46 crc kubenswrapper[4792]: I1127 17:42:46.371704 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbfqw" event={"ID":"6767091a-7e57-427b-8259-ec667bcdccea","Type":"ContainerStarted","Data":"a66321efea31181e9aceb63c2b8fec073c0d499b0dff5b1c84ad2f3d56d9cb3e"} Nov 27 17:42:46 crc kubenswrapper[4792]: I1127 17:42:46.375454 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmr4h" event={"ID":"deca9f23-96a3-4563-aca4-fe32e4303269","Type":"ContainerDied","Data":"489de1426dff12df47d352a1ff8823a5ec9b06567fcb0c6453bcdfe4cf23226e"} Nov 27 17:42:46 crc kubenswrapper[4792]: I1127 17:42:46.375256 4792 generic.go:334] "Generic (PLEG): container finished" podID="deca9f23-96a3-4563-aca4-fe32e4303269" containerID="489de1426dff12df47d352a1ff8823a5ec9b06567fcb0c6453bcdfe4cf23226e" exitCode=0 Nov 27 17:42:46 crc kubenswrapper[4792]: I1127 17:42:46.376453 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmr4h" event={"ID":"deca9f23-96a3-4563-aca4-fe32e4303269","Type":"ContainerStarted","Data":"9ded8cae892de48ccf58fa521cf21e5d6b7b0cdc1252bac739b543c2c5f95108"} Nov 27 17:42:48 crc kubenswrapper[4792]: I1127 17:42:48.397801 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmr4h" event={"ID":"deca9f23-96a3-4563-aca4-fe32e4303269","Type":"ContainerStarted","Data":"29bdb4cdff11e748359d86f3218c4d03750de86db7b7aa8d57863d9863385b1b"} Nov 27 17:42:49 crc kubenswrapper[4792]: I1127 17:42:49.412898 4792 generic.go:334] "Generic (PLEG): container finished" podID="6767091a-7e57-427b-8259-ec667bcdccea" containerID="a66321efea31181e9aceb63c2b8fec073c0d499b0dff5b1c84ad2f3d56d9cb3e" exitCode=0 Nov 27 17:42:49 crc kubenswrapper[4792]: I1127 17:42:49.412973 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbfqw" event={"ID":"6767091a-7e57-427b-8259-ec667bcdccea","Type":"ContainerDied","Data":"a66321efea31181e9aceb63c2b8fec073c0d499b0dff5b1c84ad2f3d56d9cb3e"} Nov 27 17:42:51 crc 
kubenswrapper[4792]: I1127 17:42:51.439824 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbfqw" event={"ID":"6767091a-7e57-427b-8259-ec667bcdccea","Type":"ContainerStarted","Data":"29117b8300dd72e7a8d8151d47c15118dc9b05481382b25c7d8973e4af59e2a1"} Nov 27 17:42:51 crc kubenswrapper[4792]: I1127 17:42:51.443793 4792 generic.go:334] "Generic (PLEG): container finished" podID="deca9f23-96a3-4563-aca4-fe32e4303269" containerID="29bdb4cdff11e748359d86f3218c4d03750de86db7b7aa8d57863d9863385b1b" exitCode=0 Nov 27 17:42:51 crc kubenswrapper[4792]: I1127 17:42:51.443871 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmr4h" event={"ID":"deca9f23-96a3-4563-aca4-fe32e4303269","Type":"ContainerDied","Data":"29bdb4cdff11e748359d86f3218c4d03750de86db7b7aa8d57863d9863385b1b"} Nov 27 17:42:51 crc kubenswrapper[4792]: I1127 17:42:51.474302 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hbfqw" podStartSLOduration=3.345229692 podStartE2EDuration="9.474276735s" podCreationTimestamp="2025-11-27 17:42:42 +0000 UTC" firstStartedPulling="2025-11-27 17:42:44.341532607 +0000 UTC m=+1986.684358925" lastFinishedPulling="2025-11-27 17:42:50.47057965 +0000 UTC m=+1992.813405968" observedRunningTime="2025-11-27 17:42:51.46478407 +0000 UTC m=+1993.807610398" watchObservedRunningTime="2025-11-27 17:42:51.474276735 +0000 UTC m=+1993.817103053" Nov 27 17:42:52 crc kubenswrapper[4792]: I1127 17:42:52.458034 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmr4h" event={"ID":"deca9f23-96a3-4563-aca4-fe32e4303269","Type":"ContainerStarted","Data":"74e8af84d09b3d9d1d172788efce615b86f3cc12f0124d438b5dbfb79482a7ed"} Nov 27 17:42:52 crc kubenswrapper[4792]: I1127 17:42:52.481386 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xmr4h" podStartSLOduration=2.976596014 podStartE2EDuration="8.481366044s" podCreationTimestamp="2025-11-27 17:42:44 +0000 UTC" firstStartedPulling="2025-11-27 17:42:46.378064458 +0000 UTC m=+1988.720890776" lastFinishedPulling="2025-11-27 17:42:51.882834488 +0000 UTC m=+1994.225660806" observedRunningTime="2025-11-27 17:42:52.474834643 +0000 UTC m=+1994.817660981" watchObservedRunningTime="2025-11-27 17:42:52.481366044 +0000 UTC m=+1994.824192372" Nov 27 17:42:52 crc kubenswrapper[4792]: I1127 17:42:52.672676 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:52 crc kubenswrapper[4792]: I1127 17:42:52.672749 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:42:53 crc kubenswrapper[4792]: I1127 17:42:53.718292 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hbfqw" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="registry-server" probeResult="failure" output=< Nov 27 17:42:53 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:42:53 crc kubenswrapper[4792]: > Nov 27 17:42:55 crc kubenswrapper[4792]: I1127 17:42:55.281265 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:55 crc kubenswrapper[4792]: I1127 17:42:55.281315 4792 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:42:56 crc kubenswrapper[4792]: I1127 17:42:56.340001 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xmr4h" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="registry-server" probeResult="failure" output=< Nov 27 17:42:56 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:42:56 crc kubenswrapper[4792]: > Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.394232 4792 scope.go:117] "RemoveContainer" containerID="fee903b72898cc2dd1e1a0cfac38363611e16b778b15e906689fc4c2c2020fd5" Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.442897 4792 scope.go:117] "RemoveContainer" containerID="a0dca2840d61f83149dbfeaa0e3f16ccf530b80f5675edcf674161eacac706f0" Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.523006 4792 scope.go:117] "RemoveContainer" containerID="b66aa979ce4708ecea7cfde05719c4e12c56e096e38aa1147571c78b485a9077" Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.582798 4792 scope.go:117] "RemoveContainer" containerID="ab19081fce7ec06833f4e88cfa92673bfef58d9820f8c81f5280bc7283115f0e" Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.656172 4792 scope.go:117] "RemoveContainer" containerID="ecbedbd0813b3998d2a87baf3f7f1f4e7cf69884771ad95054b72a8c2205f852" Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.776529 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hbfqw" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="registry-server" probeResult="failure" output=< Nov 27 17:43:03 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:43:03 crc kubenswrapper[4792]: > Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.809871 4792 scope.go:117] "RemoveContainer" containerID="d93942c6b2a9b644992c9f40071802dae48fcdf1065c2c531db3e6e85667fad2" Nov 27 17:43:05 crc kubenswrapper[4792]: I1127 17:43:05.368197 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:43:05 crc kubenswrapper[4792]: I1127 17:43:05.440603 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:43:05 crc kubenswrapper[4792]: I1127 17:43:05.615845 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmr4h"] Nov 27 17:43:06 crc kubenswrapper[4792]: I1127 17:43:06.682196 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xmr4h" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="registry-server" containerID="cri-o://74e8af84d09b3d9d1d172788efce615b86f3cc12f0124d438b5dbfb79482a7ed" gracePeriod=2 Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.693293 4792 generic.go:334] "Generic (PLEG): container finished" podID="deca9f23-96a3-4563-aca4-fe32e4303269" containerID="74e8af84d09b3d9d1d172788efce615b86f3cc12f0124d438b5dbfb79482a7ed" exitCode=0 Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.693341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmr4h" event={"ID":"deca9f23-96a3-4563-aca4-fe32e4303269","Type":"ContainerDied","Data":"74e8af84d09b3d9d1d172788efce615b86f3cc12f0124d438b5dbfb79482a7ed"} Nov 27 17:43:07 crc kubenswrapper[4792]: 
Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.394232 4792 scope.go:117] "RemoveContainer" containerID="fee903b72898cc2dd1e1a0cfac38363611e16b778b15e906689fc4c2c2020fd5"
Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.442897 4792 scope.go:117] "RemoveContainer" containerID="a0dca2840d61f83149dbfeaa0e3f16ccf530b80f5675edcf674161eacac706f0"
Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.523006 4792 scope.go:117] "RemoveContainer" containerID="b66aa979ce4708ecea7cfde05719c4e12c56e096e38aa1147571c78b485a9077"
Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.582798 4792 scope.go:117] "RemoveContainer" containerID="ab19081fce7ec06833f4e88cfa92673bfef58d9820f8c81f5280bc7283115f0e"
Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.656172 4792 scope.go:117] "RemoveContainer" containerID="ecbedbd0813b3998d2a87baf3f7f1f4e7cf69884771ad95054b72a8c2205f852"
Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.776529 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hbfqw" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="registry-server" probeResult="failure" output=<
Nov 27 17:43:03 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Nov 27 17:43:03 crc kubenswrapper[4792]: >
Nov 27 17:43:03 crc kubenswrapper[4792]: I1127 17:43:03.809871 4792 scope.go:117] "RemoveContainer" containerID="d93942c6b2a9b644992c9f40071802dae48fcdf1065c2c531db3e6e85667fad2"
Nov 27 17:43:05 crc kubenswrapper[4792]: I1127 17:43:05.368197 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmr4h"
Nov 27 17:43:05 crc kubenswrapper[4792]: I1127 17:43:05.440603 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmr4h"
Nov 27 17:43:05 crc kubenswrapper[4792]: I1127 17:43:05.615845 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmr4h"]
Nov 27 17:43:06 crc kubenswrapper[4792]: I1127 17:43:06.682196 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xmr4h" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="registry-server" containerID="cri-o://74e8af84d09b3d9d1d172788efce615b86f3cc12f0124d438b5dbfb79482a7ed" gracePeriod=2
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.693293 4792 generic.go:334] "Generic (PLEG): container finished" podID="deca9f23-96a3-4563-aca4-fe32e4303269" containerID="74e8af84d09b3d9d1d172788efce615b86f3cc12f0124d438b5dbfb79482a7ed" exitCode=0
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.693341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmr4h" event={"ID":"deca9f23-96a3-4563-aca4-fe32e4303269","Type":"ContainerDied","Data":"74e8af84d09b3d9d1d172788efce615b86f3cc12f0124d438b5dbfb79482a7ed"}
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.820827 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmr4h"
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.849567 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-utilities\") pod \"deca9f23-96a3-4563-aca4-fe32e4303269\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") "
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.849738 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-catalog-content\") pod \"deca9f23-96a3-4563-aca4-fe32e4303269\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") "
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.849804 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2lqf\" (UniqueName: \"kubernetes.io/projected/deca9f23-96a3-4563-aca4-fe32e4303269-kube-api-access-q2lqf\") pod \"deca9f23-96a3-4563-aca4-fe32e4303269\" (UID: \"deca9f23-96a3-4563-aca4-fe32e4303269\") "
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.851787 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-utilities" (OuterVolumeSpecName: "utilities") pod "deca9f23-96a3-4563-aca4-fe32e4303269" (UID: "deca9f23-96a3-4563-aca4-fe32e4303269"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.863890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deca9f23-96a3-4563-aca4-fe32e4303269-kube-api-access-q2lqf" (OuterVolumeSpecName: "kube-api-access-q2lqf") pod "deca9f23-96a3-4563-aca4-fe32e4303269" (UID: "deca9f23-96a3-4563-aca4-fe32e4303269"). InnerVolumeSpecName "kube-api-access-q2lqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.911252 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "deca9f23-96a3-4563-aca4-fe32e4303269" (UID: "deca9f23-96a3-4563-aca4-fe32e4303269"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.953053 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.953091 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deca9f23-96a3-4563-aca4-fe32e4303269-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:07 crc kubenswrapper[4792]: I1127 17:43:07.953107 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2lqf\" (UniqueName: \"kubernetes.io/projected/deca9f23-96a3-4563-aca4-fe32e4303269-kube-api-access-q2lqf\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.290066 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.290182 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.707770 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmr4h" event={"ID":"deca9f23-96a3-4563-aca4-fe32e4303269","Type":"ContainerDied","Data":"9ded8cae892de48ccf58fa521cf21e5d6b7b0cdc1252bac739b543c2c5f95108"} Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.707858 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmr4h" Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.708118 4792 scope.go:117] "RemoveContainer" containerID="74e8af84d09b3d9d1d172788efce615b86f3cc12f0124d438b5dbfb79482a7ed" Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.736438 4792 scope.go:117] "RemoveContainer" containerID="29bdb4cdff11e748359d86f3218c4d03750de86db7b7aa8d57863d9863385b1b" Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.811786 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmr4h"] Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.844739 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xmr4h"] Nov 27 17:43:08 crc kubenswrapper[4792]: I1127 17:43:08.853831 4792 scope.go:117] "RemoveContainer" containerID="489de1426dff12df47d352a1ff8823a5ec9b06567fcb0c6453bcdfe4cf23226e" Nov 27 17:43:10 crc kubenswrapper[4792]: I1127 17:43:10.707453 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" path="/var/lib/kubelet/pods/deca9f23-96a3-4563-aca4-fe32e4303269/volumes" Nov 27 17:43:12 crc kubenswrapper[4792]: I1127 17:43:12.756937 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:43:12 crc kubenswrapper[4792]: I1127 17:43:12.807844 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:43:13 crc kubenswrapper[4792]: I1127 17:43:13.547070 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbfqw"] Nov 27 17:43:14 crc kubenswrapper[4792]: I1127 17:43:14.787192 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hbfqw" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="registry-server" containerID="cri-o://29117b8300dd72e7a8d8151d47c15118dc9b05481382b25c7d8973e4af59e2a1" gracePeriod=2 Nov 27 17:43:15 crc kubenswrapper[4792]: I1127 17:43:15.802973 4792 generic.go:334] "Generic (PLEG): container finished" podID="6767091a-7e57-427b-8259-ec667bcdccea" containerID="29117b8300dd72e7a8d8151d47c15118dc9b05481382b25c7d8973e4af59e2a1" exitCode=0 Nov 27 17:43:15 crc kubenswrapper[4792]: I1127 17:43:15.803042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbfqw" event={"ID":"6767091a-7e57-427b-8259-ec667bcdccea","Type":"ContainerDied","Data":"29117b8300dd72e7a8d8151d47c15118dc9b05481382b25c7d8973e4af59e2a1"} Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.051075 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jxhlm"] Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.060976 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jxhlm"] Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.212203 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.281766 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-utilities\") pod \"6767091a-7e57-427b-8259-ec667bcdccea\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.282074 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqmc6\" (UniqueName: \"kubernetes.io/projected/6767091a-7e57-427b-8259-ec667bcdccea-kube-api-access-jqmc6\") pod \"6767091a-7e57-427b-8259-ec667bcdccea\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.282190 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-catalog-content\") pod \"6767091a-7e57-427b-8259-ec667bcdccea\" (UID: \"6767091a-7e57-427b-8259-ec667bcdccea\") " Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.282672 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-utilities" (OuterVolumeSpecName: "utilities") pod "6767091a-7e57-427b-8259-ec667bcdccea" (UID: "6767091a-7e57-427b-8259-ec667bcdccea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.283129 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.288212 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6767091a-7e57-427b-8259-ec667bcdccea-kube-api-access-jqmc6" (OuterVolumeSpecName: "kube-api-access-jqmc6") pod "6767091a-7e57-427b-8259-ec667bcdccea" (UID: "6767091a-7e57-427b-8259-ec667bcdccea"). InnerVolumeSpecName "kube-api-access-jqmc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.348410 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6767091a-7e57-427b-8259-ec667bcdccea" (UID: "6767091a-7e57-427b-8259-ec667bcdccea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.385873 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqmc6\" (UniqueName: \"kubernetes.io/projected/6767091a-7e57-427b-8259-ec667bcdccea-kube-api-access-jqmc6\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.386076 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6767091a-7e57-427b-8259-ec667bcdccea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.701042 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9" path="/var/lib/kubelet/pods/6ea5caf4-ffc9-4d5b-a5f6-82ae038d46b9/volumes" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.818427 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hbfqw" event={"ID":"6767091a-7e57-427b-8259-ec667bcdccea","Type":"ContainerDied","Data":"5688b355dab36cfdaef4fafe769a25fdf8b1b82dc87e40c5a39e7d2b5f20301f"} Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.818476 4792 scope.go:117] "RemoveContainer" containerID="29117b8300dd72e7a8d8151d47c15118dc9b05481382b25c7d8973e4af59e2a1" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.818474 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hbfqw" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.849013 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hbfqw"] Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.855655 4792 scope.go:117] "RemoveContainer" containerID="a66321efea31181e9aceb63c2b8fec073c0d499b0dff5b1c84ad2f3d56d9cb3e" Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.860007 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hbfqw"] Nov 27 17:43:16 crc kubenswrapper[4792]: I1127 17:43:16.883060 4792 scope.go:117] "RemoveContainer" containerID="edf3143be6fd309905346ded2cbcc1888adbff597139e1c683ed75205b5c82db" Nov 27 17:43:18 crc kubenswrapper[4792]: I1127 17:43:18.721431 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6767091a-7e57-427b-8259-ec667bcdccea" path="/var/lib/kubelet/pods/6767091a-7e57-427b-8259-ec667bcdccea/volumes" Nov 27 17:43:28 crc kubenswrapper[4792]: I1127 17:43:28.970806 4792 generic.go:334] "Generic (PLEG): container finished" podID="941f3fd2-382e-4dc2-94f4-39df69607cee" containerID="42e5104be93b50ecb01b00123356797cd750ed60cc10fae5d8528a9cd5d916bd" exitCode=0 Nov 27 17:43:28 crc kubenswrapper[4792]: I1127 17:43:28.970931 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" event={"ID":"941f3fd2-382e-4dc2-94f4-39df69607cee","Type":"ContainerDied","Data":"42e5104be93b50ecb01b00123356797cd750ed60cc10fae5d8528a9cd5d916bd"} Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.675908 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.778984 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-ssh-key\") pod \"941f3fd2-382e-4dc2-94f4-39df69607cee\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.779037 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9w2\" (UniqueName: \"kubernetes.io/projected/941f3fd2-382e-4dc2-94f4-39df69607cee-kube-api-access-4n9w2\") pod \"941f3fd2-382e-4dc2-94f4-39df69607cee\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.779269 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-inventory\") pod \"941f3fd2-382e-4dc2-94f4-39df69607cee\" (UID: \"941f3fd2-382e-4dc2-94f4-39df69607cee\") " Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.789890 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941f3fd2-382e-4dc2-94f4-39df69607cee-kube-api-access-4n9w2" (OuterVolumeSpecName: "kube-api-access-4n9w2") pod "941f3fd2-382e-4dc2-94f4-39df69607cee" (UID: "941f3fd2-382e-4dc2-94f4-39df69607cee"). InnerVolumeSpecName "kube-api-access-4n9w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.813401 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "941f3fd2-382e-4dc2-94f4-39df69607cee" (UID: "941f3fd2-382e-4dc2-94f4-39df69607cee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.815591 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-inventory" (OuterVolumeSpecName: "inventory") pod "941f3fd2-382e-4dc2-94f4-39df69607cee" (UID: "941f3fd2-382e-4dc2-94f4-39df69607cee"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.885751 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.885786 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/941f3fd2-382e-4dc2-94f4-39df69607cee-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.885804 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9w2\" (UniqueName: \"kubernetes.io/projected/941f3fd2-382e-4dc2-94f4-39df69607cee-kube-api-access-4n9w2\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.997564 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" event={"ID":"941f3fd2-382e-4dc2-94f4-39df69607cee","Type":"ContainerDied","Data":"c3536c9e7678a2ddfada2827eab0d0eb895e516c8e51040133aa82163ac4a34d"} Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.997621 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3536c9e7678a2ddfada2827eab0d0eb895e516c8e51040133aa82163ac4a34d" Nov 27 17:43:30 crc kubenswrapper[4792]: I1127 17:43:30.997675 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.085989 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs"] Nov 27 17:43:31 crc kubenswrapper[4792]: E1127 17:43:31.087741 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="registry-server" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.087769 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="registry-server" Nov 27 17:43:31 crc kubenswrapper[4792]: E1127 17:43:31.087786 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="extract-utilities" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.087796 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="extract-utilities" Nov 27 17:43:31 crc kubenswrapper[4792]: E1127 17:43:31.087826 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="registry-server" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.087833 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="registry-server" Nov 27 17:43:31 crc kubenswrapper[4792]: E1127 17:43:31.087850 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="extract-content" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.087858 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="extract-content" Nov 27 17:43:31 crc kubenswrapper[4792]: E1127 17:43:31.087871 4792 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="941f3fd2-382e-4dc2-94f4-39df69607cee" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.087881 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="941f3fd2-382e-4dc2-94f4-39df69607cee" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 27 17:43:31 crc kubenswrapper[4792]: E1127 17:43:31.087904 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="extract-utilities" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.087911 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="extract-utilities" Nov 27 17:43:31 crc kubenswrapper[4792]: E1127 17:43:31.087933 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="extract-content" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.087940 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="extract-content" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.088260 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="941f3fd2-382e-4dc2-94f4-39df69607cee" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.088291 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6767091a-7e57-427b-8259-ec667bcdccea" containerName="registry-server" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.088327 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="deca9f23-96a3-4563-aca4-fe32e4303269" containerName="registry-server" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.089506 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.092672 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.093035 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.093167 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.093205 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.102362 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs"] Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.190676 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.190753 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpftt\" (UniqueName: \"kubernetes.io/projected/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-kube-api-access-zpftt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.190807 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.291827 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.291875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpftt\" (UniqueName: \"kubernetes.io/projected/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-kube-api-access-zpftt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.291916 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.296711 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.296942 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.323393 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpftt\" (UniqueName: \"kubernetes.io/projected/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-kube-api-access-zpftt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9thzs\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:31 crc kubenswrapper[4792]: I1127 17:43:31.430062 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:32 crc kubenswrapper[4792]: I1127 17:43:32.021449 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs"] Nov 27 17:43:33 crc kubenswrapper[4792]: I1127 17:43:33.033088 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" event={"ID":"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9","Type":"ContainerStarted","Data":"9ac365f7166f02699af0f51d870ae4b5b1d85a9d422f3dbcb51d4084697384c3"} Nov 27 17:43:34 crc kubenswrapper[4792]: I1127 17:43:34.047495 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" event={"ID":"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9","Type":"ContainerStarted","Data":"1943d1355dce3e73c54c985abd06a6eff6cdea48e551aa5d3abac7a0c5109ca0"} Nov 27 17:43:34 crc kubenswrapper[4792]: I1127 17:43:34.071525 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" podStartSLOduration=1.950151409 podStartE2EDuration="3.071507601s" podCreationTimestamp="2025-11-27 17:43:31 +0000 UTC" firstStartedPulling="2025-11-27 17:43:32.036973259 +0000 UTC m=+2034.379799577" lastFinishedPulling="2025-11-27 17:43:33.158329461 +0000 UTC m=+2035.501155769" observedRunningTime="2025-11-27 17:43:34.070732802 +0000 UTC m=+2036.413559140" watchObservedRunningTime="2025-11-27 17:43:34.071507601 +0000 UTC m=+2036.414333929" Nov 27 17:43:38 crc kubenswrapper[4792]: I1127 17:43:38.290627 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:43:38 crc kubenswrapper[4792]: I1127 17:43:38.291257 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:43:38 crc kubenswrapper[4792]: I1127 17:43:38.291320 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:43:38 crc kubenswrapper[4792]: I1127 17:43:38.292551 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a39d10190b610649d44c98afe3563275dbc74e7d629b3db59b3d9af2418ae45"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 17:43:38 crc kubenswrapper[4792]: I1127 17:43:38.292682 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://8a39d10190b610649d44c98afe3563275dbc74e7d629b3db59b3d9af2418ae45" gracePeriod=600 Nov 27 17:43:39 crc kubenswrapper[4792]: I1127 17:43:39.123064 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="8a39d10190b610649d44c98afe3563275dbc74e7d629b3db59b3d9af2418ae45" exitCode=0 Nov 27 17:43:39 crc kubenswrapper[4792]: I1127 17:43:39.123174 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"8a39d10190b610649d44c98afe3563275dbc74e7d629b3db59b3d9af2418ae45"} Nov 27 17:43:39 crc kubenswrapper[4792]: I1127 17:43:39.123809 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d"} Nov 27 17:43:39 crc kubenswrapper[4792]: I1127 17:43:39.123845 4792 scope.go:117] "RemoveContainer" containerID="bc26b3fe94d7131bde5ff42fcf06ee3aa16fb334b9df8322fe558202f9d1fe09" Nov 27 17:43:40 crc kubenswrapper[4792]: I1127 17:43:40.139069 4792 generic.go:334] "Generic (PLEG): container finished" podID="4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9" containerID="1943d1355dce3e73c54c985abd06a6eff6cdea48e551aa5d3abac7a0c5109ca0" exitCode=0 Nov 27 17:43:40 crc kubenswrapper[4792]: I1127 17:43:40.139162 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" event={"ID":"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9","Type":"ContainerDied","Data":"1943d1355dce3e73c54c985abd06a6eff6cdea48e551aa5d3abac7a0c5109ca0"} Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.049893 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-h9ssd"] Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.062619 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-h9ssd"] Nov 27 17:43:41 crc 
Nov 27 17:43:40 crc kubenswrapper[4792]: I1127 17:43:40.139069 4792 generic.go:334] "Generic (PLEG): container finished" podID="4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9" containerID="1943d1355dce3e73c54c985abd06a6eff6cdea48e551aa5d3abac7a0c5109ca0" exitCode=0
Nov 27 17:43:40 crc kubenswrapper[4792]: I1127 17:43:40.139162 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" event={"ID":"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9","Type":"ContainerDied","Data":"1943d1355dce3e73c54c985abd06a6eff6cdea48e551aa5d3abac7a0c5109ca0"}
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.049893 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-h9ssd"]
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.062619 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-h9ssd"]
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.642819 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs"
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.698589 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-ssh-key\") pod \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") "
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.698655 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpftt\" (UniqueName: \"kubernetes.io/projected/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-kube-api-access-zpftt\") pod \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") "
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.699952 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-inventory\") pod \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\" (UID: \"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9\") "
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.708827 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-kube-api-access-zpftt" (OuterVolumeSpecName: "kube-api-access-zpftt") pod "4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9" (UID: "4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9"). InnerVolumeSpecName "kube-api-access-zpftt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.737836 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-inventory" (OuterVolumeSpecName: "inventory") pod "4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9" (UID: "4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.739843 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9" (UID: "4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.802447 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.802499 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:41 crc kubenswrapper[4792]: I1127 17:43:41.802515 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpftt\" (UniqueName: \"kubernetes.io/projected/4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9-kube-api-access-zpftt\") on node \"crc\" DevicePath \"\"" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.038476 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q6wwh"] Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.053255 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q6wwh"] Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.169421 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" event={"ID":"4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9","Type":"ContainerDied","Data":"9ac365f7166f02699af0f51d870ae4b5b1d85a9d422f3dbcb51d4084697384c3"} Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.169467 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac365f7166f02699af0f51d870ae4b5b1d85a9d422f3dbcb51d4084697384c3" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.169499 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9thzs" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.249761 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9"] Nov 27 17:43:42 crc kubenswrapper[4792]: E1127 17:43:42.250513 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.250553 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.250903 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.251843 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.254082 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.254813 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.255395 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.264405 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.302369 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9"] Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.314583 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgsk8\" (UniqueName: \"kubernetes.io/projected/6dbb090d-6543-4de1-80f3-1a61798d7870-kube-api-access-hgsk8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.314672 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.314752 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.417510 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgsk8\" (UniqueName: \"kubernetes.io/projected/6dbb090d-6543-4de1-80f3-1a61798d7870-kube-api-access-hgsk8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.417691 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.417840 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: 
\"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.422420 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.422429 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.438887 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgsk8\" (UniqueName: \"kubernetes.io/projected/6dbb090d-6543-4de1-80f3-1a61798d7870-kube-api-access-hgsk8\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-nv2q9\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.574986 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.709970 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19f9765b-6579-4017-9ee9-dcf8f7829b19" path="/var/lib/kubelet/pods/19f9765b-6579-4017-9ee9-dcf8f7829b19/volumes" Nov 27 17:43:42 crc kubenswrapper[4792]: I1127 17:43:42.711144 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8333cb7e-8739-4af6-a1eb-775aa791fb82" path="/var/lib/kubelet/pods/8333cb7e-8739-4af6-a1eb-775aa791fb82/volumes" Nov 27 17:43:43 crc kubenswrapper[4792]: I1127 17:43:43.126732 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9"] Nov 27 17:43:43 crc kubenswrapper[4792]: W1127 17:43:43.139882 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dbb090d_6543_4de1_80f3_1a61798d7870.slice/crio-03485ef1402da753a91caf12306c4e4749a889783d6841f02204c17f88c4b5eb WatchSource:0}: Error finding container 03485ef1402da753a91caf12306c4e4749a889783d6841f02204c17f88c4b5eb: Status 404 returned error can't find the container with id 03485ef1402da753a91caf12306c4e4749a889783d6841f02204c17f88c4b5eb Nov 27 17:43:43 crc kubenswrapper[4792]: I1127 17:43:43.181970 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" event={"ID":"6dbb090d-6543-4de1-80f3-1a61798d7870","Type":"ContainerStarted","Data":"03485ef1402da753a91caf12306c4e4749a889783d6841f02204c17f88c4b5eb"} Nov 27 17:43:46 crc kubenswrapper[4792]: I1127 17:43:46.234679 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" event={"ID":"6dbb090d-6543-4de1-80f3-1a61798d7870","Type":"ContainerStarted","Data":"3308bde32fd43c6649a961d31ef6dffec09e212cccd07749ce5ff045cec98187"} Nov 27 17:43:46 crc kubenswrapper[4792]: I1127 
Nov 27 17:43:47 crc kubenswrapper[4792]: I1127 17:43:47.036148 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-n4j6s"]
Nov 27 17:43:47 crc kubenswrapper[4792]: I1127 17:43:47.051913 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-n4j6s"]
Nov 27 17:43:48 crc kubenswrapper[4792]: I1127 17:43:48.052563 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-7573-account-create-update-xsrqm"]
Nov 27 17:43:48 crc kubenswrapper[4792]: I1127 17:43:48.067113 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-7573-account-create-update-xsrqm"]
Nov 27 17:43:48 crc kubenswrapper[4792]: I1127 17:43:48.719048 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eacffe2-c3ab-4816-9d52-3c73de2d37cf" path="/var/lib/kubelet/pods/0eacffe2-c3ab-4816-9d52-3c73de2d37cf/volumes"
Nov 27 17:43:48 crc kubenswrapper[4792]: I1127 17:43:48.721487 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4be33e9-5259-40d3-9496-c1836cb67060" path="/var/lib/kubelet/pods/c4be33e9-5259-40d3-9496-c1836cb67060/volumes"
Nov 27 17:44:04 crc kubenswrapper[4792]: I1127 17:44:04.102581 4792 scope.go:117] "RemoveContainer" containerID="87fe28e874f1e0d0ef67214423c214c2891dbf9306f72c0d2034c84ddd18d717"
Nov 27 17:44:04 crc kubenswrapper[4792]: I1127 17:44:04.169091 4792 scope.go:117] "RemoveContainer" containerID="3daed44d41e87d5ace004f013ff7d2af8ad89f1a9a8bb93a827e2aeefadce950"
Nov 27 17:44:04 crc kubenswrapper[4792]: I1127 17:44:04.227868 4792 scope.go:117] "RemoveContainer" containerID="9a2c9775a0039dab8d6364a5505dabdbda47779a7e42cb3903d19cb5a59a4dd8"
Nov 27 17:44:04 crc kubenswrapper[4792]: I1127 17:44:04.310119 4792 scope.go:117] "RemoveContainer" containerID="0a719a2345f367160540bad33d2b8c13b0ffef85060d9d0865301a309e55ffd9"
Nov 27 17:44:04 crc kubenswrapper[4792]: I1127 17:44:04.362003 4792 scope.go:117] "RemoveContainer" containerID="d150a6bbbbf4de430746d37d3faeabd0a9dd0c6e7745b9a8e5786f1e0b786204"
Nov 27 17:44:26 crc kubenswrapper[4792]: I1127 17:44:26.725410 4792 generic.go:334] "Generic (PLEG): container finished" podID="6dbb090d-6543-4de1-80f3-1a61798d7870" containerID="3308bde32fd43c6649a961d31ef6dffec09e212cccd07749ce5ff045cec98187" exitCode=0
Nov 27 17:44:26 crc kubenswrapper[4792]: I1127 17:44:26.725557 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" event={"ID":"6dbb090d-6543-4de1-80f3-1a61798d7870","Type":"ContainerDied","Data":"3308bde32fd43c6649a961d31ef6dffec09e212cccd07749ce5ff045cec98187"}
Nov 27 17:44:27 crc kubenswrapper[4792]: I1127 17:44:27.063081 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-p5x75"]
Nov 27 17:44:27 crc kubenswrapper[4792]: I1127 17:44:27.082609 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-p5x75"]
pods=["openstack/nova-cell1-cell-mapping-p5x75"] Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.233029 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.351347 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgsk8\" (UniqueName: \"kubernetes.io/projected/6dbb090d-6543-4de1-80f3-1a61798d7870-kube-api-access-hgsk8\") pod \"6dbb090d-6543-4de1-80f3-1a61798d7870\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.351726 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-ssh-key\") pod \"6dbb090d-6543-4de1-80f3-1a61798d7870\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.351858 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-inventory\") pod \"6dbb090d-6543-4de1-80f3-1a61798d7870\" (UID: \"6dbb090d-6543-4de1-80f3-1a61798d7870\") " Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.358881 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbb090d-6543-4de1-80f3-1a61798d7870-kube-api-access-hgsk8" (OuterVolumeSpecName: "kube-api-access-hgsk8") pod "6dbb090d-6543-4de1-80f3-1a61798d7870" (UID: "6dbb090d-6543-4de1-80f3-1a61798d7870"). InnerVolumeSpecName "kube-api-access-hgsk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.393904 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-inventory" (OuterVolumeSpecName: "inventory") pod "6dbb090d-6543-4de1-80f3-1a61798d7870" (UID: "6dbb090d-6543-4de1-80f3-1a61798d7870"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.398704 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6dbb090d-6543-4de1-80f3-1a61798d7870" (UID: "6dbb090d-6543-4de1-80f3-1a61798d7870"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.455525 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.455565 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgsk8\" (UniqueName: \"kubernetes.io/projected/6dbb090d-6543-4de1-80f3-1a61798d7870-kube-api-access-hgsk8\") on node \"crc\" DevicePath \"\"" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.455579 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dbb090d-6543-4de1-80f3-1a61798d7870-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.711076 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa9c9c8-933b-4098-ab35-e8d83489a194" path="/var/lib/kubelet/pods/dfa9c9c8-933b-4098-ab35-e8d83489a194/volumes" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.810198 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" event={"ID":"6dbb090d-6543-4de1-80f3-1a61798d7870","Type":"ContainerDied","Data":"03485ef1402da753a91caf12306c4e4749a889783d6841f02204c17f88c4b5eb"} Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.810255 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03485ef1402da753a91caf12306c4e4749a889783d6841f02204c17f88c4b5eb" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.810386 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-nv2q9" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.862527 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r"] Nov 27 17:44:28 crc kubenswrapper[4792]: E1127 17:44:28.863358 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbb090d-6543-4de1-80f3-1a61798d7870" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.863380 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbb090d-6543-4de1-80f3-1a61798d7870" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.863622 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbb090d-6543-4de1-80f3-1a61798d7870" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.864828 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.867707 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.874861 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r"] Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.869024 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.869174 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.869210 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.987896 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.988017 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:28 crc kubenswrapper[4792]: I1127 17:44:28.988075 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw82c\" (UniqueName: \"kubernetes.io/projected/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-kube-api-access-cw82c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.092247 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.092393 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw82c\" (UniqueName: \"kubernetes.io/projected/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-kube-api-access-cw82c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.092609 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" 
(UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.096076 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.096554 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.125000 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw82c\" (UniqueName: \"kubernetes.io/projected/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-kube-api-access-cw82c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.203208 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.808817 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r"] Nov 27 17:44:29 crc kubenswrapper[4792]: I1127 17:44:29.827778 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" event={"ID":"97d6e3f1-b04e-4f38-b104-3f74f8ed4683","Type":"ContainerStarted","Data":"8dba129cd48ae904c46146b51b723c2058e809b497e713e3b28a91cd7377a2b9"} Nov 27 17:44:30 crc kubenswrapper[4792]: I1127 17:44:30.843268 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" event={"ID":"97d6e3f1-b04e-4f38-b104-3f74f8ed4683","Type":"ContainerStarted","Data":"7404b2e7fcf6b3608988f27c61fbbd118e258341f0f365471d482542185e34ce"} Nov 27 17:44:30 crc kubenswrapper[4792]: I1127 17:44:30.870021 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" podStartSLOduration=2.38749671 podStartE2EDuration="2.870000657s" podCreationTimestamp="2025-11-27 17:44:28 +0000 UTC" firstStartedPulling="2025-11-27 17:44:29.808106665 +0000 UTC m=+2092.150933003" lastFinishedPulling="2025-11-27 17:44:30.290610622 +0000 UTC m=+2092.633436950" observedRunningTime="2025-11-27 17:44:30.862568042 +0000 UTC m=+2093.205394360" watchObservedRunningTime="2025-11-27 17:44:30.870000657 +0000 UTC m=+2093.212826985" Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.165637 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"] Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.168050 4792 util.go:30] "No sandbox for pod can be found. 
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.172536 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.172550 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.178469 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"]
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.302829 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqbc\" (UniqueName: \"kubernetes.io/projected/177d378f-85ae-40d3-9c3f-3bfb6a40790a-kube-api-access-5dqbc\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.303935 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/177d378f-85ae-40d3-9c3f-3bfb6a40790a-secret-volume\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.303983 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177d378f-85ae-40d3-9c3f-3bfb6a40790a-config-volume\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.406193 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/177d378f-85ae-40d3-9c3f-3bfb6a40790a-secret-volume\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.406265 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177d378f-85ae-40d3-9c3f-3bfb6a40790a-config-volume\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.406425 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dqbc\" (UniqueName: \"kubernetes.io/projected/177d378f-85ae-40d3-9c3f-3bfb6a40790a-kube-api-access-5dqbc\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.407239 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177d378f-85ae-40d3-9c3f-3bfb6a40790a-config-volume\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.411419 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/177d378f-85ae-40d3-9c3f-3bfb6a40790a-secret-volume\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.426388 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dqbc\" (UniqueName: \"kubernetes.io/projected/177d378f-85ae-40d3-9c3f-3bfb6a40790a-kube-api-access-5dqbc\") pod \"collect-profiles-29404425-mv5bv\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:00 crc kubenswrapper[4792]: I1127 17:45:00.493258 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:01 crc kubenswrapper[4792]: I1127 17:45:01.002944 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"]
Nov 27 17:45:01 crc kubenswrapper[4792]: I1127 17:45:01.238925 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv" event={"ID":"177d378f-85ae-40d3-9c3f-3bfb6a40790a","Type":"ContainerStarted","Data":"347edecdcc8f141cbac11b64ed1f182272768af5d117d697270b68a2af97839e"}
Nov 27 17:45:02 crc kubenswrapper[4792]: I1127 17:45:02.252361 4792 generic.go:334] "Generic (PLEG): container finished" podID="177d378f-85ae-40d3-9c3f-3bfb6a40790a" containerID="6534cb2584a0fe1d498b7388b428986618bd726ab754c1bebe846cc1ad731382" exitCode=0
Nov 27 17:45:02 crc kubenswrapper[4792]: I1127 17:45:02.252626 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv" event={"ID":"177d378f-85ae-40d3-9c3f-3bfb6a40790a","Type":"ContainerDied","Data":"6534cb2584a0fe1d498b7388b428986618bd726ab754c1bebe846cc1ad731382"}
Nov 27 17:45:03 crc kubenswrapper[4792]: I1127 17:45:03.808068 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:03 crc kubenswrapper[4792]: I1127 17:45:03.901473 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177d378f-85ae-40d3-9c3f-3bfb6a40790a-config-volume\") pod \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") "
Nov 27 17:45:03 crc kubenswrapper[4792]: I1127 17:45:03.901566 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/177d378f-85ae-40d3-9c3f-3bfb6a40790a-secret-volume\") pod \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") "
Nov 27 17:45:03 crc kubenswrapper[4792]: I1127 17:45:03.901745 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dqbc\" (UniqueName: \"kubernetes.io/projected/177d378f-85ae-40d3-9c3f-3bfb6a40790a-kube-api-access-5dqbc\") pod \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\" (UID: \"177d378f-85ae-40d3-9c3f-3bfb6a40790a\") "
Nov 27 17:45:03 crc kubenswrapper[4792]: I1127 17:45:03.902224 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/177d378f-85ae-40d3-9c3f-3bfb6a40790a-config-volume" (OuterVolumeSpecName: "config-volume") pod "177d378f-85ae-40d3-9c3f-3bfb6a40790a" (UID: "177d378f-85ae-40d3-9c3f-3bfb6a40790a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:45:03 crc kubenswrapper[4792]: I1127 17:45:03.902679 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177d378f-85ae-40d3-9c3f-3bfb6a40790a-config-volume\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:03 crc kubenswrapper[4792]: I1127 17:45:03.909204 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177d378f-85ae-40d3-9c3f-3bfb6a40790a-kube-api-access-5dqbc" (OuterVolumeSpecName: "kube-api-access-5dqbc") pod "177d378f-85ae-40d3-9c3f-3bfb6a40790a" (UID: "177d378f-85ae-40d3-9c3f-3bfb6a40790a"). InnerVolumeSpecName "kube-api-access-5dqbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:45:03 crc kubenswrapper[4792]: I1127 17:45:03.909887 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/177d378f-85ae-40d3-9c3f-3bfb6a40790a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "177d378f-85ae-40d3-9c3f-3bfb6a40790a" (UID: "177d378f-85ae-40d3-9c3f-3bfb6a40790a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:45:04 crc kubenswrapper[4792]: I1127 17:45:04.008410 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/177d378f-85ae-40d3-9c3f-3bfb6a40790a-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:04 crc kubenswrapper[4792]: I1127 17:45:04.008485 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dqbc\" (UniqueName: \"kubernetes.io/projected/177d378f-85ae-40d3-9c3f-3bfb6a40790a-kube-api-access-5dqbc\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:04 crc kubenswrapper[4792]: I1127 17:45:04.280556 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv" event={"ID":"177d378f-85ae-40d3-9c3f-3bfb6a40790a","Type":"ContainerDied","Data":"347edecdcc8f141cbac11b64ed1f182272768af5d117d697270b68a2af97839e"}
Nov 27 17:45:04 crc kubenswrapper[4792]: I1127 17:45:04.280611 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347edecdcc8f141cbac11b64ed1f182272768af5d117d697270b68a2af97839e"
Nov 27 17:45:04 crc kubenswrapper[4792]: I1127 17:45:04.280704 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"
Nov 27 17:45:04 crc kubenswrapper[4792]: I1127 17:45:04.572227 4792 scope.go:117] "RemoveContainer" containerID="a9ab281c068c43096c38ee9657bb793a033586fb1d91b6c27d90e5fe50e2dafa"
Nov 27 17:45:04 crc kubenswrapper[4792]: I1127 17:45:04.893272 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq"]
Nov 27 17:45:04 crc kubenswrapper[4792]: I1127 17:45:04.914868 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404380-47pdq"]
Nov 27 17:45:06 crc kubenswrapper[4792]: I1127 17:45:06.704094 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e202a3-1f47-409c-83f2-a066ddb1ffe2" path="/var/lib/kubelet/pods/07e202a3-1f47-409c-83f2-a066ddb1ffe2/volumes"
Nov 27 17:45:23 crc kubenswrapper[4792]: I1127 17:45:23.498805 4792 generic.go:334] "Generic (PLEG): container finished" podID="97d6e3f1-b04e-4f38-b104-3f74f8ed4683" containerID="7404b2e7fcf6b3608988f27c61fbbd118e258341f0f365471d482542185e34ce" exitCode=0
Nov 27 17:45:23 crc kubenswrapper[4792]: I1127 17:45:23.498851 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" event={"ID":"97d6e3f1-b04e-4f38-b104-3f74f8ed4683","Type":"ContainerDied","Data":"7404b2e7fcf6b3608988f27c61fbbd118e258341f0f365471d482542185e34ce"}
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.092974 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.227292 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-ssh-key\") pod \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") "
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.227387 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-inventory\") pod \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") "
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.227620 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw82c\" (UniqueName: \"kubernetes.io/projected/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-kube-api-access-cw82c\") pod \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\" (UID: \"97d6e3f1-b04e-4f38-b104-3f74f8ed4683\") "
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.236129 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-kube-api-access-cw82c" (OuterVolumeSpecName: "kube-api-access-cw82c") pod "97d6e3f1-b04e-4f38-b104-3f74f8ed4683" (UID: "97d6e3f1-b04e-4f38-b104-3f74f8ed4683"). InnerVolumeSpecName "kube-api-access-cw82c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.261609 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-inventory" (OuterVolumeSpecName: "inventory") pod "97d6e3f1-b04e-4f38-b104-3f74f8ed4683" (UID: "97d6e3f1-b04e-4f38-b104-3f74f8ed4683"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.300272 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "97d6e3f1-b04e-4f38-b104-3f74f8ed4683" (UID: "97d6e3f1-b04e-4f38-b104-3f74f8ed4683"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.330812 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw82c\" (UniqueName: \"kubernetes.io/projected/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-kube-api-access-cw82c\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.330833 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.330851 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97d6e3f1-b04e-4f38-b104-3f74f8ed4683-inventory\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.524025 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r" event={"ID":"97d6e3f1-b04e-4f38-b104-3f74f8ed4683","Type":"ContainerDied","Data":"8dba129cd48ae904c46146b51b723c2058e809b497e713e3b28a91cd7377a2b9"}
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.524368 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dba129cd48ae904c46146b51b723c2058e809b497e713e3b28a91cd7377a2b9"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.524130 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.626067 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hjr4g"]
Nov 27 17:45:25 crc kubenswrapper[4792]: E1127 17:45:25.626548 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d6e3f1-b04e-4f38-b104-3f74f8ed4683" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.626565 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d6e3f1-b04e-4f38-b104-3f74f8ed4683" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:45:25 crc kubenswrapper[4792]: E1127 17:45:25.626578 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177d378f-85ae-40d3-9c3f-3bfb6a40790a" containerName="collect-profiles"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.626584 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="177d378f-85ae-40d3-9c3f-3bfb6a40790a" containerName="collect-profiles"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.626949 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d6e3f1-b04e-4f38-b104-3f74f8ed4683" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.626980 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="177d378f-85ae-40d3-9c3f-3bfb6a40790a" containerName="collect-profiles"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.627731 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.629772 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.630114 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.630181 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.630366 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.640591 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hjr4g"]
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.739712 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpsj5\" (UniqueName: \"kubernetes.io/projected/72c0a753-4805-42a5-9b41-4fc97aad561b-kube-api-access-hpsj5\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.739782 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.739873 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.842253 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpsj5\" (UniqueName: \"kubernetes.io/projected/72c0a753-4805-42a5-9b41-4fc97aad561b-kube-api-access-hpsj5\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.842344 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.842460 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.847456 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.859703 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:25 crc kubenswrapper[4792]: I1127 17:45:25.865471 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpsj5\" (UniqueName: \"kubernetes.io/projected/72c0a753-4805-42a5-9b41-4fc97aad561b-kube-api-access-hpsj5\") pod \"ssh-known-hosts-edpm-deployment-hjr4g\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") " pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:26 crc kubenswrapper[4792]: I1127 17:45:26.011200 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:26 crc kubenswrapper[4792]: I1127 17:45:26.624359 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 27 17:45:26 crc kubenswrapper[4792]: I1127 17:45:26.629258 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hjr4g"]
Nov 27 17:45:27 crc kubenswrapper[4792]: I1127 17:45:27.543511 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g" event={"ID":"72c0a753-4805-42a5-9b41-4fc97aad561b","Type":"ContainerStarted","Data":"e1180ab277fb0636a6f3f0e29c1bb65aa8d6818fe7aa850b65d51ed63aa2e831"}
Nov 27 17:45:28 crc kubenswrapper[4792]: I1127 17:45:28.555144 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g" event={"ID":"72c0a753-4805-42a5-9b41-4fc97aad561b","Type":"ContainerStarted","Data":"b7521cd9aa8d58f5179bb7234bee5691d32b26101918b64a63072de4984a3381"}
Nov 27 17:45:28 crc kubenswrapper[4792]: I1127 17:45:28.578107 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g" podStartSLOduration=2.824510135 podStartE2EDuration="3.578082947s" podCreationTimestamp="2025-11-27 17:45:25 +0000 UTC" firstStartedPulling="2025-11-27 17:45:26.62402799 +0000 UTC m=+2148.966854308" lastFinishedPulling="2025-11-27 17:45:27.377600762 +0000 UTC m=+2149.720427120" observedRunningTime="2025-11-27 17:45:28.569238317 +0000 UTC m=+2150.912064645" watchObservedRunningTime="2025-11-27 17:45:28.578082947 +0000 UTC m=+2150.920909275"
Nov 27 17:45:34 crc kubenswrapper[4792]: I1127 17:45:34.641538 4792 generic.go:334] "Generic (PLEG): container finished" podID="72c0a753-4805-42a5-9b41-4fc97aad561b" containerID="b7521cd9aa8d58f5179bb7234bee5691d32b26101918b64a63072de4984a3381" exitCode=0
Nov 27 17:45:34 crc kubenswrapper[4792]: I1127 17:45:34.641888 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g" event={"ID":"72c0a753-4805-42a5-9b41-4fc97aad561b","Type":"ContainerDied","Data":"b7521cd9aa8d58f5179bb7234bee5691d32b26101918b64a63072de4984a3381"}
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.172585 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.298487 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpsj5\" (UniqueName: \"kubernetes.io/projected/72c0a753-4805-42a5-9b41-4fc97aad561b-kube-api-access-hpsj5\") pod \"72c0a753-4805-42a5-9b41-4fc97aad561b\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") "
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.298549 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-ssh-key-openstack-edpm-ipam\") pod \"72c0a753-4805-42a5-9b41-4fc97aad561b\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") "
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.298713 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-inventory-0\") pod \"72c0a753-4805-42a5-9b41-4fc97aad561b\" (UID: \"72c0a753-4805-42a5-9b41-4fc97aad561b\") "
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.305260 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c0a753-4805-42a5-9b41-4fc97aad561b-kube-api-access-hpsj5" (OuterVolumeSpecName: "kube-api-access-hpsj5") pod "72c0a753-4805-42a5-9b41-4fc97aad561b" (UID: "72c0a753-4805-42a5-9b41-4fc97aad561b"). InnerVolumeSpecName "kube-api-access-hpsj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.344770 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "72c0a753-4805-42a5-9b41-4fc97aad561b" (UID: "72c0a753-4805-42a5-9b41-4fc97aad561b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.360904 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72c0a753-4805-42a5-9b41-4fc97aad561b" (UID: "72c0a753-4805-42a5-9b41-4fc97aad561b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.402830 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpsj5\" (UniqueName: \"kubernetes.io/projected/72c0a753-4805-42a5-9b41-4fc97aad561b-kube-api-access-hpsj5\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.402873 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.402893 4792 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/72c0a753-4805-42a5-9b41-4fc97aad561b-inventory-0\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.667879 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g" event={"ID":"72c0a753-4805-42a5-9b41-4fc97aad561b","Type":"ContainerDied","Data":"e1180ab277fb0636a6f3f0e29c1bb65aa8d6818fe7aa850b65d51ed63aa2e831"}
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.667923 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1180ab277fb0636a6f3f0e29c1bb65aa8d6818fe7aa850b65d51ed63aa2e831"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.667993 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hjr4g"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.744802 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"]
Nov 27 17:45:36 crc kubenswrapper[4792]: E1127 17:45:36.745543 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c0a753-4805-42a5-9b41-4fc97aad561b" containerName="ssh-known-hosts-edpm-deployment"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.745561 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c0a753-4805-42a5-9b41-4fc97aad561b" containerName="ssh-known-hosts-edpm-deployment"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.745981 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c0a753-4805-42a5-9b41-4fc97aad561b" containerName="ssh-known-hosts-edpm-deployment"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.747150 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.752852 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.752964 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.753067 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.753136 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.780583 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"]
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.814904 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb7dh\" (UniqueName: \"kubernetes.io/projected/1c6f6f25-0120-4355-9803-5e7b6743588b-kube-api-access-gb7dh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.815225 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.815289 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.917373 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb7dh\" (UniqueName: \"kubernetes.io/projected/1c6f6f25-0120-4355-9803-5e7b6743588b-kube-api-access-gb7dh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.917473 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.917501 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.921166 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.922082 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:36 crc kubenswrapper[4792]: I1127 17:45:36.943505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb7dh\" (UniqueName: \"kubernetes.io/projected/1c6f6f25-0120-4355-9803-5e7b6743588b-kube-api-access-gb7dh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-28hzm\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:37 crc kubenswrapper[4792]: I1127 17:45:37.073255 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:37 crc kubenswrapper[4792]: I1127 17:45:37.791842 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"]
Nov 27 17:45:38 crc kubenswrapper[4792]: I1127 17:45:38.290112 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:45:38 crc kubenswrapper[4792]: I1127 17:45:38.290170 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 17:45:38 crc kubenswrapper[4792]: I1127 17:45:38.709500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm" event={"ID":"1c6f6f25-0120-4355-9803-5e7b6743588b","Type":"ContainerStarted","Data":"a8c6edec21b338a43b34654c4e48f8b2187da1ec3c5bac645fe28173d8ff0655"}
Nov 27 17:45:38 crc kubenswrapper[4792]: I1127 17:45:38.961548 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 27 17:45:39 crc kubenswrapper[4792]: I1127 17:45:39.721765 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm" event={"ID":"1c6f6f25-0120-4355-9803-5e7b6743588b","Type":"ContainerStarted","Data":"cc16802ae6c5649ad1de0f79736e0fcc2dcc0481eeba3c31aa297169b52d0960"}
Nov 27 17:45:39 crc kubenswrapper[4792]: I1127 17:45:39.751276 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm" podStartSLOduration=2.599319525 podStartE2EDuration="3.751254564s" podCreationTimestamp="2025-11-27 17:45:36 +0000 UTC" firstStartedPulling="2025-11-27 17:45:37.807060002 +0000 UTC m=+2160.149886320" lastFinishedPulling="2025-11-27 17:45:38.958995031 +0000 UTC m=+2161.301821359" observedRunningTime="2025-11-27 17:45:39.743898291 +0000 UTC m=+2162.086724609" watchObservedRunningTime="2025-11-27 17:45:39.751254564 +0000 UTC m=+2162.094080892"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.223112 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwhqp"]
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.226326 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.264718 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwhqp"]
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.343143 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-utilities\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.343576 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-catalog-content\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.343601 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wn5s\" (UniqueName: \"kubernetes.io/projected/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-kube-api-access-9wn5s\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.446301 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-catalog-content\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.446359 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wn5s\" (UniqueName: \"kubernetes.io/projected/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-kube-api-access-9wn5s\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.446433 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-utilities\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.446952 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-utilities\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.447161 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-catalog-content\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.467408 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wn5s\" (UniqueName: \"kubernetes.io/projected/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-kube-api-access-9wn5s\") pod \"redhat-operators-pwhqp\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:41 crc kubenswrapper[4792]: I1127 17:45:41.553710 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwhqp"
Nov 27 17:45:42 crc kubenswrapper[4792]: I1127 17:45:42.096067 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwhqp"]
Nov 27 17:45:42 crc kubenswrapper[4792]: W1127 17:45:42.099900 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c69321a_8f4d_46b6_adc6_c5dcd89ddc60.slice/crio-3fad23cd405536b2bd8cc5ce2c384517196fdf97784fda0882630e747aa4d904 WatchSource:0}: Error finding container 3fad23cd405536b2bd8cc5ce2c384517196fdf97784fda0882630e747aa4d904: Status 404 returned error can't find the container with id 3fad23cd405536b2bd8cc5ce2c384517196fdf97784fda0882630e747aa4d904
Nov 27 17:45:42 crc kubenswrapper[4792]: I1127 17:45:42.752415 4792 generic.go:334] "Generic (PLEG): container finished" podID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerID="f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257" exitCode=0
Nov 27 17:45:42 crc kubenswrapper[4792]: I1127 17:45:42.752623 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwhqp" event={"ID":"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60","Type":"ContainerDied","Data":"f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257"}
Nov 27 17:45:42 crc kubenswrapper[4792]: I1127 17:45:42.752739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwhqp" event={"ID":"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60","Type":"ContainerStarted","Data":"3fad23cd405536b2bd8cc5ce2c384517196fdf97784fda0882630e747aa4d904"}
Nov 27 17:45:43 crc kubenswrapper[4792]: I1127 17:45:43.767606 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwhqp" event={"ID":"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60","Type":"ContainerStarted","Data":"971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0"}
Nov 27 17:45:48 crc kubenswrapper[4792]: I1127 17:45:48.845709 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm" event={"ID":"1c6f6f25-0120-4355-9803-5e7b6743588b","Type":"ContainerDied","Data":"cc16802ae6c5649ad1de0f79736e0fcc2dcc0481eeba3c31aa297169b52d0960"}
Nov 27 17:45:48 crc kubenswrapper[4792]: I1127 17:45:48.836627 4792 generic.go:334] "Generic (PLEG): container finished" podID="1c6f6f25-0120-4355-9803-5e7b6743588b" containerID="cc16802ae6c5649ad1de0f79736e0fcc2dcc0481eeba3c31aa297169b52d0960" exitCode=0
Nov 27 17:45:49 crc kubenswrapper[4792]: I1127 17:45:49.863079 4792 generic.go:334] "Generic (PLEG): container finished" podID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerID="971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0" exitCode=0
Nov 27 17:45:49 crc kubenswrapper[4792]: I1127 17:45:49.863126 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwhqp" event={"ID":"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60","Type":"ContainerDied","Data":"971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0"}
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.465370 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.596499 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-ssh-key\") pod \"1c6f6f25-0120-4355-9803-5e7b6743588b\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") "
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.596593 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb7dh\" (UniqueName: \"kubernetes.io/projected/1c6f6f25-0120-4355-9803-5e7b6743588b-kube-api-access-gb7dh\") pod \"1c6f6f25-0120-4355-9803-5e7b6743588b\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") "
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.596700 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-inventory\") pod \"1c6f6f25-0120-4355-9803-5e7b6743588b\" (UID: \"1c6f6f25-0120-4355-9803-5e7b6743588b\") "
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.624235 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6f6f25-0120-4355-9803-5e7b6743588b-kube-api-access-gb7dh" (OuterVolumeSpecName: "kube-api-access-gb7dh") pod "1c6f6f25-0120-4355-9803-5e7b6743588b" (UID: "1c6f6f25-0120-4355-9803-5e7b6743588b"). InnerVolumeSpecName "kube-api-access-gb7dh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.638442 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1c6f6f25-0120-4355-9803-5e7b6743588b" (UID: "1c6f6f25-0120-4355-9803-5e7b6743588b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.640824 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-inventory" (OuterVolumeSpecName: "inventory") pod "1c6f6f25-0120-4355-9803-5e7b6743588b" (UID: "1c6f6f25-0120-4355-9803-5e7b6743588b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.698991 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-inventory\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.699345 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1c6f6f25-0120-4355-9803-5e7b6743588b-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.699356 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb7dh\" (UniqueName: \"kubernetes.io/projected/1c6f6f25-0120-4355-9803-5e7b6743588b-kube-api-access-gb7dh\") on node \"crc\" DevicePath \"\""
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.878791 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwhqp" event={"ID":"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60","Type":"ContainerStarted","Data":"8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8"}
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.880757 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm" event={"ID":"1c6f6f25-0120-4355-9803-5e7b6743588b","Type":"ContainerDied","Data":"a8c6edec21b338a43b34654c4e48f8b2187da1ec3c5bac645fe28173d8ff0655"}
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.880806 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c6edec21b338a43b34654c4e48f8b2187da1ec3c5bac645fe28173d8ff0655"
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.880802 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-28hzm"
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.918064 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwhqp" podStartSLOduration=2.401960343 podStartE2EDuration="9.918045793s" podCreationTimestamp="2025-11-27 17:45:41 +0000 UTC" firstStartedPulling="2025-11-27 17:45:42.754521488 +0000 UTC m=+2165.097347806" lastFinishedPulling="2025-11-27 17:45:50.270606928 +0000 UTC m=+2172.613433256" observedRunningTime="2025-11-27 17:45:50.915856618 +0000 UTC m=+2173.258682946" watchObservedRunningTime="2025-11-27 17:45:50.918045793 +0000 UTC m=+2173.260872111"
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.960338 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz"]
Nov 27 17:45:50 crc kubenswrapper[4792]: E1127 17:45:50.960958 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6f6f25-0120-4355-9803-5e7b6743588b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.960979 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6f6f25-0120-4355-9803-5e7b6743588b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.961233 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6f6f25-0120-4355-9803-5e7b6743588b" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.962067 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz"
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.966820 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.966935 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.966994 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.967227 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:45:50 crc kubenswrapper[4792]: I1127 17:45:50.972599 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz"] Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.109130 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.109566 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.109625 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhh7\" (UniqueName: \"kubernetes.io/projected/20b3860e-a914-42cd-b2e7-35ab54507a89-kube-api-access-mkhh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.211938 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.211986 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkhh7\" (UniqueName: \"kubernetes.io/projected/20b3860e-a914-42cd-b2e7-35ab54507a89-kube-api-access-mkhh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.212047 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: 
\"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.218201 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.222839 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.228952 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkhh7\" (UniqueName: \"kubernetes.io/projected/20b3860e-a914-42cd-b2e7-35ab54507a89-kube-api-access-mkhh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.282597 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.554589 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pwhqp" Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.554958 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwhqp" Nov 27 17:45:51 crc kubenswrapper[4792]: W1127 17:45:51.906680 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20b3860e_a914_42cd_b2e7_35ab54507a89.slice/crio-d31dba480c924e9ea2b1c9ca8840fb26aad9286ba2f82ed5ab8d6b9f8e7c5fc8 WatchSource:0}: Error finding container d31dba480c924e9ea2b1c9ca8840fb26aad9286ba2f82ed5ab8d6b9f8e7c5fc8: Status 404 returned error can't find the container with id d31dba480c924e9ea2b1c9ca8840fb26aad9286ba2f82ed5ab8d6b9f8e7c5fc8 Nov 27 17:45:51 crc kubenswrapper[4792]: I1127 17:45:51.910090 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz"] Nov 27 17:45:52 crc kubenswrapper[4792]: I1127 17:45:52.620778 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwhqp" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="registry-server" probeResult="failure" output=< Nov 27 17:45:52 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:45:52 crc kubenswrapper[4792]: > Nov 27 17:45:52 crc kubenswrapper[4792]: I1127 17:45:52.900417 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" event={"ID":"20b3860e-a914-42cd-b2e7-35ab54507a89","Type":"ContainerStarted","Data":"0c2a99179462b45db59a3a5729ba8d6af40f0d90bf8213f8bb22b52458f16ced"} Nov 27 17:45:52 crc kubenswrapper[4792]: I1127 17:45:52.900462 4792 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" event={"ID":"20b3860e-a914-42cd-b2e7-35ab54507a89","Type":"ContainerStarted","Data":"d31dba480c924e9ea2b1c9ca8840fb26aad9286ba2f82ed5ab8d6b9f8e7c5fc8"} Nov 27 17:45:54 crc kubenswrapper[4792]: I1127 17:45:54.048487 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" podStartSLOduration=3.571212338 podStartE2EDuration="4.048465605s" podCreationTimestamp="2025-11-27 17:45:50 +0000 UTC" firstStartedPulling="2025-11-27 17:45:51.909740261 +0000 UTC m=+2174.252566579" lastFinishedPulling="2025-11-27 17:45:52.386993528 +0000 UTC m=+2174.729819846" observedRunningTime="2025-11-27 17:45:52.919814066 +0000 UTC m=+2175.262640394" watchObservedRunningTime="2025-11-27 17:45:54.048465605 +0000 UTC m=+2176.391291943" Nov 27 17:45:54 crc kubenswrapper[4792]: I1127 17:45:54.052490 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-44cd2"] Nov 27 17:45:54 crc kubenswrapper[4792]: I1127 17:45:54.070176 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-44cd2"] Nov 27 17:45:54 crc kubenswrapper[4792]: I1127 17:45:54.706282 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e297bad-8615-4fcd-a43a-4ef82af97714" path="/var/lib/kubelet/pods/7e297bad-8615-4fcd-a43a-4ef82af97714/volumes" Nov 27 17:46:01 crc kubenswrapper[4792]: I1127 17:46:01.611868 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwhqp" Nov 27 17:46:01 crc kubenswrapper[4792]: I1127 17:46:01.691438 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwhqp" Nov 27 17:46:01 crc kubenswrapper[4792]: I1127 17:46:01.861130 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwhqp"] Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.018083 4792 generic.go:334] "Generic (PLEG): container finished" podID="20b3860e-a914-42cd-b2e7-35ab54507a89" containerID="0c2a99179462b45db59a3a5729ba8d6af40f0d90bf8213f8bb22b52458f16ced" exitCode=0 Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.018206 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" event={"ID":"20b3860e-a914-42cd-b2e7-35ab54507a89","Type":"ContainerDied","Data":"0c2a99179462b45db59a3a5729ba8d6af40f0d90bf8213f8bb22b52458f16ced"} Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.018939 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pwhqp" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="registry-server" containerID="cri-o://8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8" gracePeriod=2 Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.596823 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwhqp" Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.710137 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wn5s\" (UniqueName: \"kubernetes.io/projected/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-kube-api-access-9wn5s\") pod \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.710175 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-utilities\") pod \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.710246 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-catalog-content\") pod \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\" (UID: \"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60\") " Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.711671 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-utilities" (OuterVolumeSpecName: "utilities") pod "4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" (UID: "4c69321a-8f4d-46b6-adc6-c5dcd89ddc60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.725039 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-kube-api-access-9wn5s" (OuterVolumeSpecName: "kube-api-access-9wn5s") pod "4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" (UID: "4c69321a-8f4d-46b6-adc6-c5dcd89ddc60"). InnerVolumeSpecName "kube-api-access-9wn5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.813038 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" (UID: "4c69321a-8f4d-46b6-adc6-c5dcd89ddc60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.819433 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wn5s\" (UniqueName: \"kubernetes.io/projected/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-kube-api-access-9wn5s\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.819729 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:03 crc kubenswrapper[4792]: I1127 17:46:03.819743 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.035801 4792 generic.go:334] "Generic (PLEG): container finished" podID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerID="8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8" exitCode=0 Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.035873 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwhqp" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.035885 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwhqp" event={"ID":"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60","Type":"ContainerDied","Data":"8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8"} Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.035944 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwhqp" event={"ID":"4c69321a-8f4d-46b6-adc6-c5dcd89ddc60","Type":"ContainerDied","Data":"3fad23cd405536b2bd8cc5ce2c384517196fdf97784fda0882630e747aa4d904"} Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.035970 4792 scope.go:117] "RemoveContainer" containerID="8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.084398 4792 scope.go:117] "RemoveContainer" containerID="971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.091732 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwhqp"] Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.104739 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pwhqp"] Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.119569 4792 scope.go:117] "RemoveContainer" containerID="f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.195333 4792 scope.go:117] "RemoveContainer" containerID="8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8" Nov 27 17:46:04 crc kubenswrapper[4792]: E1127 17:46:04.196270 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8\": container with ID starting with 8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8 not found: ID does not exist" containerID="8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.196314 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8"} err="failed to get container status \"8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8\": rpc error: code = NotFound desc = could not find container \"8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8\": container with ID starting with 8c6d0836ca0a1b43d0fe83cbabfa372ab46709798ddc97ecc8a0bc51e9e02dc8 not found: ID does not exist" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.196342 4792 scope.go:117] "RemoveContainer" containerID="971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0" Nov 27 17:46:04 crc kubenswrapper[4792]: E1127 17:46:04.196735 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0\": container with ID starting with 971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0 not found: ID does not exist" containerID="971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.196765 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0"} err="failed to get container status \"971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0\": rpc error: code = NotFound desc = could not find container \"971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0\": container with ID starting with 971a8f4f765c20d0ac36244f3fe2a448674d0f13efecde148f97bccf3d01eea0 not found: ID does not exist" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.196787 4792 scope.go:117] "RemoveContainer" containerID="f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257" Nov 27 17:46:04 crc kubenswrapper[4792]: E1127 17:46:04.197069 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257\": container with ID starting with f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257 not found: ID does not exist" containerID="f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.197093 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257"} err="failed to get container status \"f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257\": rpc error: code = NotFound desc = could not find container \"f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257\": container with ID starting with f717ace21416662d4eba639fd4e7e0e0ff8db51be0b171b8276802059fedf257 not found: ID does not exist" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.627733 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.696235 4792 scope.go:117] "RemoveContainer" containerID="a2afa8b8f89b8d445d72e5faae1ddf4353f83f9942c96979b0bd2313dc3184cd" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.710174 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" path="/var/lib/kubelet/pods/4c69321a-8f4d-46b6-adc6-c5dcd89ddc60/volumes" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.733381 4792 scope.go:117] "RemoveContainer" containerID="8c1e59b9826ef4d622534d5d6a0ce641b6405fceb3b61bfc99920b1db5bff0a3" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.738749 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-ssh-key\") pod \"20b3860e-a914-42cd-b2e7-35ab54507a89\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.738824 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkhh7\" (UniqueName: \"kubernetes.io/projected/20b3860e-a914-42cd-b2e7-35ab54507a89-kube-api-access-mkhh7\") pod \"20b3860e-a914-42cd-b2e7-35ab54507a89\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.739183 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-inventory\") pod \"20b3860e-a914-42cd-b2e7-35ab54507a89\" (UID: \"20b3860e-a914-42cd-b2e7-35ab54507a89\") " Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.744791 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b3860e-a914-42cd-b2e7-35ab54507a89-kube-api-access-mkhh7" (OuterVolumeSpecName: "kube-api-access-mkhh7") pod "20b3860e-a914-42cd-b2e7-35ab54507a89" (UID: "20b3860e-a914-42cd-b2e7-35ab54507a89"). InnerVolumeSpecName "kube-api-access-mkhh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.770879 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "20b3860e-a914-42cd-b2e7-35ab54507a89" (UID: "20b3860e-a914-42cd-b2e7-35ab54507a89"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.784717 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-inventory" (OuterVolumeSpecName: "inventory") pod "20b3860e-a914-42cd-b2e7-35ab54507a89" (UID: "20b3860e-a914-42cd-b2e7-35ab54507a89"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.842828 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.843777 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/20b3860e-a914-42cd-b2e7-35ab54507a89-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:04 crc kubenswrapper[4792]: I1127 17:46:04.843799 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkhh7\" (UniqueName: \"kubernetes.io/projected/20b3860e-a914-42cd-b2e7-35ab54507a89-kube-api-access-mkhh7\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.112103 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.111855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz" event={"ID":"20b3860e-a914-42cd-b2e7-35ab54507a89","Type":"ContainerDied","Data":"d31dba480c924e9ea2b1c9ca8840fb26aad9286ba2f82ed5ab8d6b9f8e7c5fc8"} Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.112323 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d31dba480c924e9ea2b1c9ca8840fb26aad9286ba2f82ed5ab8d6b9f8e7c5fc8" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.183756 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj"] Nov 27 17:46:05 crc kubenswrapper[4792]: E1127 17:46:05.184502 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="extract-content" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.184517 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="extract-content" Nov 27 17:46:05 crc kubenswrapper[4792]: E1127 17:46:05.184547 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="registry-server" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.184554 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="registry-server" Nov 27 17:46:05 crc kubenswrapper[4792]: E1127 17:46:05.184567 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="extract-utilities" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.184574 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="extract-utilities" Nov 27 17:46:05 crc kubenswrapper[4792]: E1127 17:46:05.184581 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b3860e-a914-42cd-b2e7-35ab54507a89" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.184587 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b3860e-a914-42cd-b2e7-35ab54507a89" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.185436 4792 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4c69321a-8f4d-46b6-adc6-c5dcd89ddc60" containerName="registry-server" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.185479 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b3860e-a914-42cd-b2e7-35ab54507a89" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.186286 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.190377 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.190598 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.190809 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.191233 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.191395 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.191592 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.191798 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.192821 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.196307 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.218177 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj"] Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.263258 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.263324 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.263470 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.263696 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.263753 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.263876 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.263930 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.263977 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.264142 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.264189 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.264246 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.264415 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h57vv\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-kube-api-access-h57vv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.264530 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.264557 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.264754 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.264913 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.367872 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368004 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368073 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368202 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368259 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368350 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368416 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368483 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: 
\"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368589 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368695 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368752 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368834 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h57vv\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-kube-api-access-h57vv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368927 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.368989 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.369079 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.369167 4792 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.375333 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.376993 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.377237 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.378802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.379044 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.379330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.379908 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: 
\"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.379920 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.380419 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.384429 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.384689 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.387432 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.388280 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.388961 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.389430 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.401344 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h57vv\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-kube-api-access-h57vv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:05 crc kubenswrapper[4792]: I1127 17:46:05.543356 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:06 crc kubenswrapper[4792]: I1127 17:46:06.123177 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj"] Nov 27 17:46:07 crc kubenswrapper[4792]: I1127 17:46:07.146525 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" event={"ID":"95dfa4b6-84cb-439b-a9ff-fbe5b048973e","Type":"ContainerStarted","Data":"294c7ea4e8d979fa587d80a948d51c7ed2d169fc903dc9e31123e97581fab046"} Nov 27 17:46:07 crc kubenswrapper[4792]: I1127 17:46:07.147263 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" event={"ID":"95dfa4b6-84cb-439b-a9ff-fbe5b048973e","Type":"ContainerStarted","Data":"afb2953c5bdc4e2d2845535e0c86e0efe33f826183da63109643b002ba2a08a2"} Nov 27 17:46:08 crc kubenswrapper[4792]: I1127 17:46:08.290808 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:46:08 crc kubenswrapper[4792]: I1127 17:46:08.291186 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.338174 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" podStartSLOduration=6.720353898 podStartE2EDuration="7.338140737s" podCreationTimestamp="2025-11-27 17:46:05 +0000 UTC" firstStartedPulling="2025-11-27 17:46:06.131369485 +0000 UTC m=+2188.474195803" lastFinishedPulling="2025-11-27 17:46:06.749156314 +0000 UTC m=+2189.091982642" observedRunningTime="2025-11-27 17:46:07.179138217 +0000 UTC m=+2189.521964555" watchObservedRunningTime="2025-11-27 17:46:12.338140737 +0000 UTC m=+2194.680967095" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.340780 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qflvn"] Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.344907 4792 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.358977 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflvn"] Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.375542 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-utilities\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.375856 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-catalog-content\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.375926 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f8sw\" (UniqueName: \"kubernetes.io/projected/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-kube-api-access-5f8sw\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.478185 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-catalog-content\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.478283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f8sw\" (UniqueName: \"kubernetes.io/projected/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-kube-api-access-5f8sw\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.478343 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-utilities\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.478890 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-catalog-content\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.478908 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-utilities\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.501294 4792 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f8sw\" (UniqueName: \"kubernetes.io/projected/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-kube-api-access-5f8sw\") pod \"redhat-marketplace-qflvn\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:12 crc kubenswrapper[4792]: I1127 17:46:12.713475 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:13 crc kubenswrapper[4792]: I1127 17:46:13.209610 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflvn"] Nov 27 17:46:13 crc kubenswrapper[4792]: I1127 17:46:13.252905 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflvn" event={"ID":"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e","Type":"ContainerStarted","Data":"a25208d55e2d59f20d804cb50ceae74de6ac60084c1a29f0110cc3a3c5a427b5"} Nov 27 17:46:14 crc kubenswrapper[4792]: I1127 17:46:14.268024 4792 generic.go:334] "Generic (PLEG): container finished" podID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerID="4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173" exitCode=0 Nov 27 17:46:14 crc kubenswrapper[4792]: I1127 17:46:14.268112 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflvn" event={"ID":"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e","Type":"ContainerDied","Data":"4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173"} Nov 27 17:46:16 crc kubenswrapper[4792]: I1127 17:46:16.290121 4792 generic.go:334] "Generic (PLEG): container finished" podID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerID="c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460" exitCode=0 Nov 27 17:46:16 crc kubenswrapper[4792]: I1127 17:46:16.290194 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflvn" event={"ID":"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e","Type":"ContainerDied","Data":"c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460"} Nov 27 17:46:17 crc kubenswrapper[4792]: I1127 17:46:17.312636 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflvn" event={"ID":"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e","Type":"ContainerStarted","Data":"d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa"} Nov 27 17:46:17 crc kubenswrapper[4792]: I1127 17:46:17.339111 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qflvn" podStartSLOduration=2.855245814 podStartE2EDuration="5.339087492s" podCreationTimestamp="2025-11-27 17:46:12 +0000 UTC" firstStartedPulling="2025-11-27 17:46:14.271764907 +0000 UTC m=+2196.614591275" lastFinishedPulling="2025-11-27 17:46:16.755606625 +0000 UTC m=+2199.098432953" observedRunningTime="2025-11-27 17:46:17.330385715 +0000 UTC m=+2199.673212033" watchObservedRunningTime="2025-11-27 17:46:17.339087492 +0000 UTC m=+2199.681913810" Nov 27 17:46:22 crc kubenswrapper[4792]: I1127 17:46:22.714604 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:22 crc kubenswrapper[4792]: I1127 17:46:22.715294 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:22 crc kubenswrapper[4792]: I1127 
17:46:22.803162 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:23 crc kubenswrapper[4792]: I1127 17:46:23.442573 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:23 crc kubenswrapper[4792]: I1127 17:46:23.513549 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflvn"] Nov 27 17:46:25 crc kubenswrapper[4792]: I1127 17:46:25.408670 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qflvn" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerName="registry-server" containerID="cri-o://d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa" gracePeriod=2 Nov 27 17:46:25 crc kubenswrapper[4792]: I1127 17:46:25.955590 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.106621 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f8sw\" (UniqueName: \"kubernetes.io/projected/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-kube-api-access-5f8sw\") pod \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.106814 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-utilities\") pod \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.106973 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-catalog-content\") pod \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\" (UID: \"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e\") " Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.107978 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-utilities" (OuterVolumeSpecName: "utilities") pod "abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" (UID: "abd2acdc-63f2-4d7d-a01f-96ad981b6c2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.115304 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-kube-api-access-5f8sw" (OuterVolumeSpecName: "kube-api-access-5f8sw") pod "abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" (UID: "abd2acdc-63f2-4d7d-a01f-96ad981b6c2e"). InnerVolumeSpecName "kube-api-access-5f8sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.125603 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" (UID: "abd2acdc-63f2-4d7d-a01f-96ad981b6c2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.210582 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f8sw\" (UniqueName: \"kubernetes.io/projected/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-kube-api-access-5f8sw\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.210627 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.210639 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.424919 4792 generic.go:334] "Generic (PLEG): container finished" podID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerID="d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa" exitCode=0 Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.424973 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflvn" event={"ID":"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e","Type":"ContainerDied","Data":"d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa"} Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.425003 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qflvn" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.425048 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qflvn" event={"ID":"abd2acdc-63f2-4d7d-a01f-96ad981b6c2e","Type":"ContainerDied","Data":"a25208d55e2d59f20d804cb50ceae74de6ac60084c1a29f0110cc3a3c5a427b5"} Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.425069 4792 scope.go:117] "RemoveContainer" containerID="d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.462112 4792 scope.go:117] "RemoveContainer" containerID="c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.462697 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflvn"] Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.477362 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qflvn"] Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.485968 4792 scope.go:117] "RemoveContainer" containerID="4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.562670 4792 scope.go:117] "RemoveContainer" containerID="d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa" Nov 27 17:46:26 crc kubenswrapper[4792]: E1127 17:46:26.563514 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa\": container with ID starting with d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa not found: ID does not exist" containerID="d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.563553 4792 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa"} err="failed to get container status \"d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa\": rpc error: code = NotFound desc = could not find container \"d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa\": container with ID starting with d774bee51c64e68453c5e4100bce4590afcefe3ff6c5f38ba8e75409f955c8aa not found: ID does not exist" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.563577 4792 scope.go:117] "RemoveContainer" containerID="c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460" Nov 27 17:46:26 crc kubenswrapper[4792]: E1127 17:46:26.564183 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460\": container with ID starting with c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460 not found: ID does not exist" containerID="c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.564229 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460"} err="failed to get container status \"c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460\": rpc error: code = NotFound desc = could not find container \"c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460\": container with ID starting with c988f5f669c2e924ff11e81aa0b5d3f4958f4899677839543152e8afaa432460 not found: ID does not exist" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.564267 4792 scope.go:117] "RemoveContainer" containerID="4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173" Nov 27 17:46:26 crc kubenswrapper[4792]: E1127 17:46:26.564573 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173\": container with ID starting with 4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173 not found: ID does not exist" containerID="4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.564599 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173"} err="failed to get container status \"4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173\": rpc error: code = NotFound desc = could not find container \"4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173\": container with ID starting with 4e123c8688bd4d7b2aea71eea8bc691cbac80c07b0cc70d22c8118f456189173 not found: ID does not exist" Nov 27 17:46:26 crc kubenswrapper[4792]: I1127 17:46:26.710125 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" path="/var/lib/kubelet/pods/abd2acdc-63f2-4d7d-a01f-96ad981b6c2e/volumes" Nov 27 17:46:38 crc kubenswrapper[4792]: I1127 17:46:38.290798 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:46:38 crc kubenswrapper[4792]: I1127 17:46:38.291295 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:46:38 crc kubenswrapper[4792]: I1127 17:46:38.291340 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:46:38 crc kubenswrapper[4792]: I1127 17:46:38.292230 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 17:46:38 crc kubenswrapper[4792]: I1127 17:46:38.292294 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" gracePeriod=600 Nov 27 17:46:38 crc kubenswrapper[4792]: E1127 17:46:38.420666 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:46:39 crc kubenswrapper[4792]: I1127 17:46:39.038686 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" exitCode=0 Nov 27 17:46:39 crc kubenswrapper[4792]: I1127 17:46:39.038754 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d"} Nov 27 17:46:39 crc kubenswrapper[4792]: I1127 17:46:39.039120 4792 scope.go:117] "RemoveContainer" containerID="8a39d10190b610649d44c98afe3563275dbc74e7d629b3db59b3d9af2418ae45" Nov 27 17:46:39 crc kubenswrapper[4792]: I1127 17:46:39.040131 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:46:39 crc kubenswrapper[4792]: E1127 17:46:39.040869 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:46:42 crc kubenswrapper[4792]: I1127 17:46:42.082994 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/aodh-db-sync-8p9hr"] Nov 27 17:46:42 crc kubenswrapper[4792]: I1127 17:46:42.100681 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-8p9hr"] Nov 27 17:46:42 crc kubenswrapper[4792]: I1127 17:46:42.705899 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798770c9-f0ca-4e64-834f-c7ae9156c93f" path="/var/lib/kubelet/pods/798770c9-f0ca-4e64-834f-c7ae9156c93f/volumes" Nov 27 17:46:51 crc kubenswrapper[4792]: I1127 17:46:51.686925 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:46:51 crc kubenswrapper[4792]: E1127 17:46:51.688200 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:46:56 crc kubenswrapper[4792]: I1127 17:46:56.226725 4792 generic.go:334] "Generic (PLEG): container finished" podID="95dfa4b6-84cb-439b-a9ff-fbe5b048973e" containerID="294c7ea4e8d979fa587d80a948d51c7ed2d169fc903dc9e31123e97581fab046" exitCode=0 Nov 27 17:46:56 crc kubenswrapper[4792]: I1127 17:46:56.226857 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" event={"ID":"95dfa4b6-84cb-439b-a9ff-fbe5b048973e","Type":"ContainerDied","Data":"294c7ea4e8d979fa587d80a948d51c7ed2d169fc903dc9e31123e97581fab046"} Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.758610 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910430 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910505 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910542 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-power-monitoring-combined-ca-bundle\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910591 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h57vv\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-kube-api-access-h57vv\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910656 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910688 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910770 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ssh-key\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910825 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910850 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-nova-combined-ca-bundle\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910903 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ovn-combined-ca-bundle\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.910974 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-combined-ca-bundle\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.911018 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-inventory\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.928776 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-libvirt-combined-ca-bundle\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.928821 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-neutron-metadata-combined-ca-bundle\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.928839 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-repo-setup-combined-ca-bundle\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.928884 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-bootstrap-combined-ca-bundle\") pod \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\" (UID: \"95dfa4b6-84cb-439b-a9ff-fbe5b048973e\") " Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.959933 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.960251 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.960311 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:57 crc kubenswrapper[4792]: I1127 17:46:57.960402 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.028863 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.035297 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.035563 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.035573 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.035585 4792 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.035597 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.035482 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.036299 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.037309 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-kube-api-access-h57vv" (OuterVolumeSpecName: "kube-api-access-h57vv") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "kube-api-access-h57vv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.037734 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.039090 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.039600 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.047813 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.048975 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.052834 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.069541 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.094608 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-inventory" (OuterVolumeSpecName: "inventory") pod "95dfa4b6-84cb-439b-a9ff-fbe5b048973e" (UID: "95dfa4b6-84cb-439b-a9ff-fbe5b048973e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138586 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138626 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138659 4792 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138675 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138688 4792 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138700 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138712 4792 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138724 4792 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138738 4792 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138750 4792 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.138762 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h57vv\" (UniqueName: 
\"kubernetes.io/projected/95dfa4b6-84cb-439b-a9ff-fbe5b048973e-kube-api-access-h57vv\") on node \"crc\" DevicePath \"\"" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.251500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" event={"ID":"95dfa4b6-84cb-439b-a9ff-fbe5b048973e","Type":"ContainerDied","Data":"afb2953c5bdc4e2d2845535e0c86e0efe33f826183da63109643b002ba2a08a2"} Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.251539 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb2953c5bdc4e2d2845535e0c86e0efe33f826183da63109643b002ba2a08a2" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.251559 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.380663 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq"] Nov 27 17:46:58 crc kubenswrapper[4792]: E1127 17:46:58.381235 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerName="registry-server" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.381255 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerName="registry-server" Nov 27 17:46:58 crc kubenswrapper[4792]: E1127 17:46:58.381277 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerName="extract-content" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.381285 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerName="extract-content" Nov 27 17:46:58 crc kubenswrapper[4792]: E1127 17:46:58.381307 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95dfa4b6-84cb-439b-a9ff-fbe5b048973e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.381315 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="95dfa4b6-84cb-439b-a9ff-fbe5b048973e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 27 17:46:58 crc kubenswrapper[4792]: E1127 17:46:58.381341 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerName="extract-utilities" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.381351 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerName="extract-utilities" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.381678 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="95dfa4b6-84cb-439b-a9ff-fbe5b048973e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.381716 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="abd2acdc-63f2-4d7d-a01f-96ad981b6c2e" containerName="registry-server" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.382879 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.389122 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.389438 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.389585 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.389780 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.389936 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.391556 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq"] Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.444722 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.444790 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.444938 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.444975 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.445010 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7hs\" (UniqueName: \"kubernetes.io/projected/a8b213a4-d6e2-4ed9-b67b-625fab313079-kube-api-access-5t7hs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.547137 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.547193 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.547283 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.547309 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.547335 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7hs\" (UniqueName: \"kubernetes.io/projected/a8b213a4-d6e2-4ed9-b67b-625fab313079-kube-api-access-5t7hs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.548266 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.552380 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.552463 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.554451 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") 
" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.565186 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7hs\" (UniqueName: \"kubernetes.io/projected/a8b213a4-d6e2-4ed9-b67b-625fab313079-kube-api-access-5t7hs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jn9hq\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:58 crc kubenswrapper[4792]: I1127 17:46:58.710391 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" Nov 27 17:46:59 crc kubenswrapper[4792]: I1127 17:46:59.334013 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq"] Nov 27 17:47:00 crc kubenswrapper[4792]: I1127 17:47:00.276118 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" event={"ID":"a8b213a4-d6e2-4ed9-b67b-625fab313079","Type":"ContainerStarted","Data":"61391f570df26e1bb3bbca3b135260757abd7c67610e992daf5c9fcf88edfcf4"} Nov 27 17:47:00 crc kubenswrapper[4792]: I1127 17:47:00.276845 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" event={"ID":"a8b213a4-d6e2-4ed9-b67b-625fab313079","Type":"ContainerStarted","Data":"999fcb2b73c7ca77ac71e5bcb4c3deece10c87af45c2512ea610ab8c115c5647"} Nov 27 17:47:00 crc kubenswrapper[4792]: I1127 17:47:00.304075 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" podStartSLOduration=1.718989773 podStartE2EDuration="2.304055589s" podCreationTimestamp="2025-11-27 17:46:58 +0000 UTC" firstStartedPulling="2025-11-27 17:46:59.339396322 +0000 UTC m=+2241.682222650" lastFinishedPulling="2025-11-27 17:46:59.924462148 +0000 UTC m=+2242.267288466" observedRunningTime="2025-11-27 17:47:00.289001195 +0000 UTC m=+2242.631827513" watchObservedRunningTime="2025-11-27 17:47:00.304055589 +0000 UTC m=+2242.646881907" Nov 27 17:47:04 crc kubenswrapper[4792]: I1127 17:47:04.930204 4792 scope.go:117] "RemoveContainer" containerID="6b01ea71f9ada9af9f6f1437fe826461e0997751700ad581ec6b642b2cadedaa" Nov 27 17:47:05 crc kubenswrapper[4792]: I1127 17:47:05.687200 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:47:05 crc kubenswrapper[4792]: E1127 17:47:05.687946 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:47:17 crc kubenswrapper[4792]: I1127 17:47:17.687722 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:47:17 crc kubenswrapper[4792]: E1127 17:47:17.690888 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:47:32 crc kubenswrapper[4792]: I1127 17:47:32.686769 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:47:32 crc kubenswrapper[4792]: E1127 17:47:32.687699 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:47:47 crc kubenswrapper[4792]: I1127 17:47:47.687828 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:47:47 crc kubenswrapper[4792]: E1127 17:47:47.689126 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:48:01 crc kubenswrapper[4792]: I1127 17:48:01.687489 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:48:01 crc kubenswrapper[4792]: E1127 17:48:01.688765 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:48:05 crc kubenswrapper[4792]: I1127 17:48:05.103291 4792 generic.go:334] "Generic (PLEG): container finished" podID="a8b213a4-d6e2-4ed9-b67b-625fab313079" containerID="61391f570df26e1bb3bbca3b135260757abd7c67610e992daf5c9fcf88edfcf4" exitCode=0 Nov 27 17:48:05 crc kubenswrapper[4792]: I1127 17:48:05.103796 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" event={"ID":"a8b213a4-d6e2-4ed9-b67b-625fab313079","Type":"ContainerDied","Data":"61391f570df26e1bb3bbca3b135260757abd7c67610e992daf5c9fcf88edfcf4"} Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.712075 4792 util.go:48] "No ready sandbox for pod can be found. 
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.712075 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq"
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.789370 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-inventory\") pod \"a8b213a4-d6e2-4ed9-b67b-625fab313079\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") "
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.789845 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovn-combined-ca-bundle\") pod \"a8b213a4-d6e2-4ed9-b67b-625fab313079\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") "
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.789991 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ssh-key\") pod \"a8b213a4-d6e2-4ed9-b67b-625fab313079\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") "
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.790029 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovncontroller-config-0\") pod \"a8b213a4-d6e2-4ed9-b67b-625fab313079\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") "
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.790184 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t7hs\" (UniqueName: \"kubernetes.io/projected/a8b213a4-d6e2-4ed9-b67b-625fab313079-kube-api-access-5t7hs\") pod \"a8b213a4-d6e2-4ed9-b67b-625fab313079\" (UID: \"a8b213a4-d6e2-4ed9-b67b-625fab313079\") "
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.795883 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b213a4-d6e2-4ed9-b67b-625fab313079-kube-api-access-5t7hs" (OuterVolumeSpecName: "kube-api-access-5t7hs") pod "a8b213a4-d6e2-4ed9-b67b-625fab313079" (UID: "a8b213a4-d6e2-4ed9-b67b-625fab313079"). InnerVolumeSpecName "kube-api-access-5t7hs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.798536 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a8b213a4-d6e2-4ed9-b67b-625fab313079" (UID: "a8b213a4-d6e2-4ed9-b67b-625fab313079"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.834035 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a8b213a4-d6e2-4ed9-b67b-625fab313079" (UID: "a8b213a4-d6e2-4ed9-b67b-625fab313079"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.838772 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-inventory" (OuterVolumeSpecName: "inventory") pod "a8b213a4-d6e2-4ed9-b67b-625fab313079" (UID: "a8b213a4-d6e2-4ed9-b67b-625fab313079"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.852157 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a8b213a4-d6e2-4ed9-b67b-625fab313079" (UID: "a8b213a4-d6e2-4ed9-b67b-625fab313079"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.893747 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t7hs\" (UniqueName: \"kubernetes.io/projected/a8b213a4-d6e2-4ed9-b67b-625fab313079-kube-api-access-5t7hs\") on node \"crc\" DevicePath \"\""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.893788 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-inventory\") on node \"crc\" DevicePath \"\""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.893805 4792 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.893818 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8b213a4-d6e2-4ed9-b67b-625fab313079-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 27 17:48:06 crc kubenswrapper[4792]: I1127 17:48:06.893830 4792 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a8b213a4-d6e2-4ed9-b67b-625fab313079-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.133223 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq" event={"ID":"a8b213a4-d6e2-4ed9-b67b-625fab313079","Type":"ContainerDied","Data":"999fcb2b73c7ca77ac71e5bcb4c3deece10c87af45c2512ea610ab8c115c5647"}
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.133265 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="999fcb2b73c7ca77ac71e5bcb4c3deece10c87af45c2512ea610ab8c115c5647"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.133352 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jn9hq"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.250163 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"]
Nov 27 17:48:07 crc kubenswrapper[4792]: E1127 17:48:07.250918 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b213a4-d6e2-4ed9-b67b-625fab313079" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.250946 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b213a4-d6e2-4ed9-b67b-625fab313079" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.251285 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b213a4-d6e2-4ed9-b67b-625fab313079" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.252474 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.255700 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.255700 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.255773 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.255876 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.256973 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.264327 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.276091 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"]
Nov 27 17:48:07 crc kubenswrapper[4792]: E1127 17:48:07.279307 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87368e9c_b9b2_499a_9825_de4ff047aabd.slice\": RecentStats: unable to find data in memory cache]"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.303610 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.303775 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.303890 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.303964 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksm82\" (UniqueName: \"kubernetes.io/projected/87368e9c-b9b2-499a-9825-de4ff047aabd-kube-api-access-ksm82\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.304075 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.304134 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.406277 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.406377 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksm82\" (UniqueName: \"kubernetes.io/projected/87368e9c-b9b2-499a-9825-de4ff047aabd-kube-api-access-ksm82\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.406483 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.406533 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.406604 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.406662 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.411956 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.412149 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.412179 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.412634 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.419395 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
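On the mount side the operation executor runs the opposite sequence for the new neutron-metadata pod: VerifyControllerAttachedVolume first, then MountVolume/SetUp per volume, and the sandbox is only created once every volume reports success. A compressed sketch of that ordering (sequential here for clarity; the kubelet actually runs these operations asynchronously):

    package main

    import "fmt"

    type volume struct{ name string }

    func verifyAttached(v volume) error { fmt.Println("VerifyControllerAttachedVolume:", v.name); return nil }
    func setUp(v volume) error          { fmt.Println("MountVolume.SetUp succeeded:", v.name); return nil }

    func main() {
        vols := []volume{
            {"neutron-metadata-combined-ca-bundle"}, {"inventory"},
            {"nova-metadata-neutron-config-0"}, {"kube-api-access-ksm82"},
            {"ssh-key"}, {"neutron-ovn-metadata-agent-neutron-config-0"},
        }
        for _, v := range vols {
            if err := verifyAttached(v); err != nil {
                fmt.Println("skipping", v.name, "until attach is verified:", err)
                continue
            }
            if err := setUp(v); err != nil {
                fmt.Println("mount failed, pod cannot start:", err)
                return
            }
        }
        fmt.Println("all volumes ready; sandbox can be created")
    }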
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk" Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.425330 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksm82\" (UniqueName: \"kubernetes.io/projected/87368e9c-b9b2-499a-9825-de4ff047aabd-kube-api-access-ksm82\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk" Nov 27 17:48:07 crc kubenswrapper[4792]: I1127 17:48:07.578619 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk" Nov 27 17:48:08 crc kubenswrapper[4792]: I1127 17:48:08.164550 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"] Nov 27 17:48:09 crc kubenswrapper[4792]: I1127 17:48:09.171551 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk" event={"ID":"87368e9c-b9b2-499a-9825-de4ff047aabd","Type":"ContainerStarted","Data":"19a1f547e7452ce5ac76d0b6e67643fc3acfdf6f6f614cbabb69d159b8ce03b6"} Nov 27 17:48:09 crc kubenswrapper[4792]: I1127 17:48:09.172159 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk" event={"ID":"87368e9c-b9b2-499a-9825-de4ff047aabd","Type":"ContainerStarted","Data":"1d2a71164756fb6fba0ba2be2484000a16bb4d4be9c80cdeb311ef73ec392076"} Nov 27 17:48:09 crc kubenswrapper[4792]: I1127 17:48:09.192942 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk" podStartSLOduration=1.729514853 podStartE2EDuration="2.192926986s" podCreationTimestamp="2025-11-27 17:48:07 +0000 UTC" firstStartedPulling="2025-11-27 17:48:08.168764022 +0000 UTC m=+2310.511590330" lastFinishedPulling="2025-11-27 17:48:08.632176145 +0000 UTC m=+2310.975002463" observedRunningTime="2025-11-27 17:48:09.192767442 +0000 UTC m=+2311.535593790" watchObservedRunningTime="2025-11-27 17:48:09.192926986 +0000 UTC m=+2311.535753304" Nov 27 17:48:14 crc kubenswrapper[4792]: I1127 17:48:14.687153 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:48:14 crc kubenswrapper[4792]: E1127 17:48:14.688333 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:48:25 crc kubenswrapper[4792]: I1127 17:48:25.687815 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:48:25 crc kubenswrapper[4792]: E1127 17:48:25.690576 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Nov 27 17:48:25 crc kubenswrapper[4792]: E1127 17:48:25.690576 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:48:40 crc kubenswrapper[4792]: I1127 17:48:40.688223 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d"
Nov 27 17:48:40 crc kubenswrapper[4792]: E1127 17:48:40.689713 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:48:51 crc kubenswrapper[4792]: I1127 17:48:51.688242 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d"
Nov 27 17:48:51 crc kubenswrapper[4792]: E1127 17:48:51.689629 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:48:59 crc kubenswrapper[4792]: I1127 17:48:59.770700 4792 generic.go:334] "Generic (PLEG): container finished" podID="87368e9c-b9b2-499a-9825-de4ff047aabd" containerID="19a1f547e7452ce5ac76d0b6e67643fc3acfdf6f6f614cbabb69d159b8ce03b6" exitCode=0
Nov 27 17:48:59 crc kubenswrapper[4792]: I1127 17:48:59.770764 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk" event={"ID":"87368e9c-b9b2-499a-9825-de4ff047aabd","Type":"ContainerDied","Data":"19a1f547e7452ce5ac76d0b6e67643fc3acfdf6f6f614cbabb69d159b8ce03b6"}
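The "Generic (PLEG): container finished" / "ContainerDied" pairs above come from the pod lifecycle event generator: a relist loop that diffs container states between polls and feeds the resulting events into the sync loop. Schematically (a toy diff, not the real pleg package):

    package main

    import "fmt"

    type state string

    const (
        running state = "running"
        exited  state = "exited"
    )

    // relist diffs two snapshots of container state and emits the
    // PLEG-style events the kubelet's sync loop consumes. Simplified:
    // containers that vanish between polls are not handled here.
    func relist(prev, cur map[string]state) []string {
        var events []string
        for id, s := range cur {
            switch {
            case prev[id] != running && s == running:
                events = append(events, "ContainerStarted "+id)
            case prev[id] == running && s == exited:
                events = append(events, "ContainerDied "+id)
            }
        }
        return events
    }

    func main() {
        prev := map[string]state{"19a1f547e745": running}
        cur := map[string]state{"19a1f547e745": exited}
        for _, e := range relist(prev, cur) {
            fmt.Println(e) // ContainerDied 19a1f547e745
        }
    }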
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.269763 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.402090 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-ovn-metadata-agent-neutron-config-0\") pod \"87368e9c-b9b2-499a-9825-de4ff047aabd\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") "
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.402176 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-ssh-key\") pod \"87368e9c-b9b2-499a-9825-de4ff047aabd\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") "
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.402204 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-nova-metadata-neutron-config-0\") pod \"87368e9c-b9b2-499a-9825-de4ff047aabd\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") "
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.402285 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksm82\" (UniqueName: \"kubernetes.io/projected/87368e9c-b9b2-499a-9825-de4ff047aabd-kube-api-access-ksm82\") pod \"87368e9c-b9b2-499a-9825-de4ff047aabd\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") "
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.402333 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-inventory\") pod \"87368e9c-b9b2-499a-9825-de4ff047aabd\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") "
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.402518 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-metadata-combined-ca-bundle\") pod \"87368e9c-b9b2-499a-9825-de4ff047aabd\" (UID: \"87368e9c-b9b2-499a-9825-de4ff047aabd\") "
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.407931 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87368e9c-b9b2-499a-9825-de4ff047aabd-kube-api-access-ksm82" (OuterVolumeSpecName: "kube-api-access-ksm82") pod "87368e9c-b9b2-499a-9825-de4ff047aabd" (UID: "87368e9c-b9b2-499a-9825-de4ff047aabd"). InnerVolumeSpecName "kube-api-access-ksm82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.408898 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "87368e9c-b9b2-499a-9825-de4ff047aabd" (UID: "87368e9c-b9b2-499a-9825-de4ff047aabd"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.437821 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "87368e9c-b9b2-499a-9825-de4ff047aabd" (UID: "87368e9c-b9b2-499a-9825-de4ff047aabd"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.438255 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87368e9c-b9b2-499a-9825-de4ff047aabd" (UID: "87368e9c-b9b2-499a-9825-de4ff047aabd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.445803 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-inventory" (OuterVolumeSpecName: "inventory") pod "87368e9c-b9b2-499a-9825-de4ff047aabd" (UID: "87368e9c-b9b2-499a-9825-de4ff047aabd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.446339 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "87368e9c-b9b2-499a-9825-de4ff047aabd" (UID: "87368e9c-b9b2-499a-9825-de4ff047aabd"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.504780 4792 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.505019 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.505028 4792 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.505038 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksm82\" (UniqueName: \"kubernetes.io/projected/87368e9c-b9b2-499a-9825-de4ff047aabd-kube-api-access-ksm82\") on node \"crc\" DevicePath \"\""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.505048 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-inventory\") on node \"crc\" DevicePath \"\""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.505056 4792 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87368e9c-b9b2-499a-9825-de4ff047aabd-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.806155 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk" event={"ID":"87368e9c-b9b2-499a-9825-de4ff047aabd","Type":"ContainerDied","Data":"1d2a71164756fb6fba0ba2be2484000a16bb4d4be9c80cdeb311ef73ec392076"}
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.806215 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d2a71164756fb6fba0ba2be2484000a16bb4d4be9c80cdeb311ef73ec392076"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.806226 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.899493 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"]
Nov 27 17:49:01 crc kubenswrapper[4792]: E1127 17:49:01.900100 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87368e9c-b9b2-499a-9825-de4ff047aabd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.900126 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="87368e9c-b9b2-499a-9825-de4ff047aabd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.900454 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="87368e9c-b9b2-499a-9825-de4ff047aabd" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.901274 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.904398 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.904570 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.904679 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.904951 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.905090 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk"
Nov 27 17:49:01 crc kubenswrapper[4792]: I1127 17:49:01.918061 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"]
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.031913 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.032060 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.032125 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.032211 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5xr2\" (UniqueName: \"kubernetes.io/projected/c1228795-b08e-4f02-ac5c-a9bc71058d23-kube-api-access-s5xr2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.032267 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.133883 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5xr2\" (UniqueName: \"kubernetes.io/projected/c1228795-b08e-4f02-ac5c-a9bc71058d23-kube-api-access-s5xr2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.133980 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.134043 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.134134 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.134168 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.138608 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.139404 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.139423 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.145488 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.150193 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5xr2\" (UniqueName: \"kubernetes.io/projected/c1228795-b08e-4f02-ac5c-a9bc71058d23-kube-api-access-s5xr2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-c92t9\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.222695 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.692323 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d"
Nov 27 17:49:02 crc kubenswrapper[4792]: E1127 17:49:02.693121 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.780797 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9"]
Nov 27 17:49:02 crc kubenswrapper[4792]: I1127 17:49:02.816551 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9" event={"ID":"c1228795-b08e-4f02-ac5c-a9bc71058d23","Type":"ContainerStarted","Data":"5cf2e4f83380fff3f1ca7116b9e779041091a4962a3ce93bad06627137cea47a"}
Nov 27 17:49:03 crc kubenswrapper[4792]: I1127 17:49:03.827423 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9" event={"ID":"c1228795-b08e-4f02-ac5c-a9bc71058d23","Type":"ContainerStarted","Data":"b017d3c98356348c093bba068a4e38aace656e5a4397bdb3d1fc1d0aa5dec7c4"}
Nov 27 17:49:03 crc kubenswrapper[4792]: I1127 17:49:03.847479 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9" podStartSLOduration=2.359235537 podStartE2EDuration="2.847457289s" podCreationTimestamp="2025-11-27 17:49:01 +0000 UTC" firstStartedPulling="2025-11-27 17:49:02.784709564 +0000 UTC m=+2365.127535882" lastFinishedPulling="2025-11-27 17:49:03.272931316 +0000 UTC m=+2365.615757634" observedRunningTime="2025-11-27 17:49:03.842738062 +0000 UTC m=+2366.185564380" watchObservedRunningTime="2025-11-27 17:49:03.847457289 +0000 UTC m=+2366.190283607"
Nov 27 17:49:14 crc kubenswrapper[4792]: I1127 17:49:14.687373 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d"
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:49:27 crc kubenswrapper[4792]: I1127 17:49:27.687457 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:49:27 crc kubenswrapper[4792]: E1127 17:49:27.688576 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:49:41 crc kubenswrapper[4792]: I1127 17:49:41.687501 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:49:41 crc kubenswrapper[4792]: E1127 17:49:41.688979 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:49:55 crc kubenswrapper[4792]: I1127 17:49:55.687634 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:49:55 crc kubenswrapper[4792]: E1127 17:49:55.688471 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:50:08 crc kubenswrapper[4792]: I1127 17:50:08.696069 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:50:08 crc kubenswrapper[4792]: E1127 17:50:08.697203 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:50:22 crc kubenswrapper[4792]: I1127 17:50:22.687512 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:50:22 crc kubenswrapper[4792]: E1127 17:50:22.688859 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:50:35 crc kubenswrapper[4792]: I1127 17:50:35.687454 4792 
scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:50:35 crc kubenswrapper[4792]: E1127 17:50:35.688435 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:50:48 crc kubenswrapper[4792]: I1127 17:50:48.720466 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:50:48 crc kubenswrapper[4792]: E1127 17:50:48.721912 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:51:03 crc kubenswrapper[4792]: I1127 17:51:03.687591 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:51:03 crc kubenswrapper[4792]: E1127 17:51:03.688583 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:51:18 crc kubenswrapper[4792]: I1127 17:51:18.698197 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:51:18 crc kubenswrapper[4792]: E1127 17:51:18.699097 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:51:31 crc kubenswrapper[4792]: I1127 17:51:31.687718 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:51:31 crc kubenswrapper[4792]: E1127 17:51:31.688457 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 17:51:42 crc kubenswrapper[4792]: I1127 17:51:42.687291 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d" Nov 27 17:51:43 crc kubenswrapper[4792]: I1127 17:51:43.194782 4792 
Nov 27 17:51:43 crc kubenswrapper[4792]: I1127 17:51:43.194782 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"87957b4092fe0fede78669473477b181921b1cf815808011da28af91b060d640"}
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.801285 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-54mrz"]
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.804804 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.814431 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-54mrz"]
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.858858 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92xh8\" (UniqueName: \"kubernetes.io/projected/9790c7b4-bc61-4bae-b551-3151a0ad60c0-kube-api-access-92xh8\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.859111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-utilities\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.859201 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-catalog-content\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.962531 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92xh8\" (UniqueName: \"kubernetes.io/projected/9790c7b4-bc61-4bae-b551-3151a0ad60c0-kube-api-access-92xh8\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.962610 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-utilities\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.962691 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-catalog-content\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.963673 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-utilities\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.963695 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-catalog-content\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:46 crc kubenswrapper[4792]: I1127 17:52:46.988907 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92xh8\" (UniqueName: \"kubernetes.io/projected/9790c7b4-bc61-4bae-b551-3151a0ad60c0-kube-api-access-92xh8\") pod \"certified-operators-54mrz\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") " pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:47 crc kubenswrapper[4792]: I1127 17:52:47.131125 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:47 crc kubenswrapper[4792]: I1127 17:52:47.645371 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-54mrz"]
Nov 27 17:52:47 crc kubenswrapper[4792]: I1127 17:52:47.955676 4792 generic.go:334] "Generic (PLEG): container finished" podID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerID="41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05" exitCode=0
Nov 27 17:52:47 crc kubenswrapper[4792]: I1127 17:52:47.955775 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54mrz" event={"ID":"9790c7b4-bc61-4bae-b551-3151a0ad60c0","Type":"ContainerDied","Data":"41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05"}
Nov 27 17:52:47 crc kubenswrapper[4792]: I1127 17:52:47.955810 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54mrz" event={"ID":"9790c7b4-bc61-4bae-b551-3151a0ad60c0","Type":"ContainerStarted","Data":"8f758263fe8d6e2517f5d282af81429b3c9f9b2e881bd8f9bd91ac0497fceae1"}
Nov 27 17:52:47 crc kubenswrapper[4792]: I1127 17:52:47.959094 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 27 17:52:48 crc kubenswrapper[4792]: I1127 17:52:48.978490 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54mrz" event={"ID":"9790c7b4-bc61-4bae-b551-3151a0ad60c0","Type":"ContainerStarted","Data":"f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f"}
Nov 27 17:52:51 crc kubenswrapper[4792]: I1127 17:52:50.999697 4792 generic.go:334] "Generic (PLEG): container finished" podID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerID="f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f" exitCode=0
Nov 27 17:52:51 crc kubenswrapper[4792]: I1127 17:52:50.999812 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54mrz" event={"ID":"9790c7b4-bc61-4bae-b551-3151a0ad60c0","Type":"ContainerDied","Data":"f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f"}
Nov 27 17:52:53 crc kubenswrapper[4792]: I1127 17:52:53.023595 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54mrz" event={"ID":"9790c7b4-bc61-4bae-b551-3151a0ad60c0","Type":"ContainerStarted","Data":"8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171"}
Nov 27 17:52:53 crc kubenswrapper[4792]: I1127 17:52:53.052999 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-54mrz" podStartSLOduration=2.901694962 podStartE2EDuration="7.052976357s" podCreationTimestamp="2025-11-27 17:52:46 +0000 UTC" firstStartedPulling="2025-11-27 17:52:47.958588248 +0000 UTC m=+2590.301414576" lastFinishedPulling="2025-11-27 17:52:52.109869653 +0000 UTC m=+2594.452695971" observedRunningTime="2025-11-27 17:52:53.043628115 +0000 UTC m=+2595.386454433" watchObservedRunningTime="2025-11-27 17:52:53.052976357 +0000 UTC m=+2595.395802685"
Nov 27 17:52:57 crc kubenswrapper[4792]: I1127 17:52:57.132175 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:57 crc kubenswrapper[4792]: I1127 17:52:57.132864 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:57 crc kubenswrapper[4792]: I1127 17:52:57.188004 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:58 crc kubenswrapper[4792]: I1127 17:52:58.124880 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:52:58 crc kubenswrapper[4792]: I1127 17:52:58.184980 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-54mrz"]
Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.099262 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-54mrz" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerName="registry-server" containerID="cri-o://8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171" gracePeriod=2
Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.607603 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-54mrz"
Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.734972 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-utilities\") pod \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") "
Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.735086 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92xh8\" (UniqueName: \"kubernetes.io/projected/9790c7b4-bc61-4bae-b551-3151a0ad60c0-kube-api-access-92xh8\") pod \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") "
Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.735189 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-catalog-content\") pod \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\" (UID: \"9790c7b4-bc61-4bae-b551-3151a0ad60c0\") "
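The four probe lines above show the gating order: while the startup probe is failing the kubelet reports status "unhealthy" and leaves readiness at "" (unknown); only after startup flips to "started" does the readiness probe take over and report "ready". A toy state machine for that sequencing (illustrative, not kubelet's prober):

    package main

    import "fmt"

    type probeState struct {
        startupDone bool
        ready       bool
    }

    // observe takes the latest startup and readiness probe results and
    // returns what the kubelet would report at this point in the sequence.
    func (p *probeState) observe(startupOK, readinessOK bool) string {
        if !p.startupDone {
            if !startupOK {
                return `startup status="unhealthy", readiness status=""`
            }
            p.startupDone = true
            return `startup status="started"`
        }
        p.ready = readinessOK
        if p.ready {
            return `readiness status="ready"`
        }
        return `readiness status="not ready"`
    }

    func main() {
        var p probeState
        fmt.Println(p.observe(false, false)) // like the 17:52:57 lines
        fmt.Println(p.observe(true, false))  // startup passes
        fmt.Println(p.observe(true, true))   // like the 17:52:58 line
    }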
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.739917 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9790c7b4-bc61-4bae-b551-3151a0ad60c0-kube-api-access-92xh8" (OuterVolumeSpecName: "kube-api-access-92xh8") pod "9790c7b4-bc61-4bae-b551-3151a0ad60c0" (UID: "9790c7b4-bc61-4bae-b551-3151a0ad60c0"). InnerVolumeSpecName "kube-api-access-92xh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.786427 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9790c7b4-bc61-4bae-b551-3151a0ad60c0" (UID: "9790c7b4-bc61-4bae-b551-3151a0ad60c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.838066 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.838147 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92xh8\" (UniqueName: \"kubernetes.io/projected/9790c7b4-bc61-4bae-b551-3151a0ad60c0-kube-api-access-92xh8\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:00 crc kubenswrapper[4792]: I1127 17:53:00.838167 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9790c7b4-bc61-4bae-b551-3151a0ad60c0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.112676 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54mrz" event={"ID":"9790c7b4-bc61-4bae-b551-3151a0ad60c0","Type":"ContainerDied","Data":"8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171"} Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.112738 4792 scope.go:117] "RemoveContainer" containerID="8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.112688 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-54mrz" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.112624 4792 generic.go:334] "Generic (PLEG): container finished" podID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerID="8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171" exitCode=0 Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.112951 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-54mrz" event={"ID":"9790c7b4-bc61-4bae-b551-3151a0ad60c0","Type":"ContainerDied","Data":"8f758263fe8d6e2517f5d282af81429b3c9f9b2e881bd8f9bd91ac0497fceae1"} Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.158422 4792 scope.go:117] "RemoveContainer" containerID="f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.159307 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-54mrz"] Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.172078 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-54mrz"] Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.184197 4792 scope.go:117] "RemoveContainer" containerID="41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.242195 4792 scope.go:117] "RemoveContainer" containerID="8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171" Nov 27 17:53:01 crc kubenswrapper[4792]: E1127 17:53:01.242705 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171\": container with ID starting with 8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171 not found: ID does not exist" containerID="8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.242752 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171"} err="failed to get container status \"8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171\": rpc error: code = NotFound desc = could not find container \"8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171\": container with ID starting with 8b7bc88c7e8fb5cac687b3e4afea60a4a3a3012bf611565b5ecdd18001bd4171 not found: ID does not exist" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.242786 4792 scope.go:117] "RemoveContainer" containerID="f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f" Nov 27 17:53:01 crc kubenswrapper[4792]: E1127 17:53:01.243072 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f\": container with ID starting with f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f not found: ID does not exist" containerID="f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.243109 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f"} err="failed to get container status 
\"f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f\": rpc error: code = NotFound desc = could not find container \"f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f\": container with ID starting with f2bd631823d9948c252baf113678a247b760f9a7e6ed619d66a0928fd2dd770f not found: ID does not exist" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.243124 4792 scope.go:117] "RemoveContainer" containerID="41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05" Nov 27 17:53:01 crc kubenswrapper[4792]: E1127 17:53:01.243350 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05\": container with ID starting with 41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05 not found: ID does not exist" containerID="41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05" Nov 27 17:53:01 crc kubenswrapper[4792]: I1127 17:53:01.243377 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05"} err="failed to get container status \"41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05\": rpc error: code = NotFound desc = could not find container \"41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05\": container with ID starting with 41ec9fdef62297284a72ecb1674369ee52fb888721b4e743befab53fe51ade05 not found: ID does not exist" Nov 27 17:53:02 crc kubenswrapper[4792]: I1127 17:53:02.709081 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" path="/var/lib/kubelet/pods/9790c7b4-bc61-4bae-b551-3151a0ad60c0/volumes" Nov 27 17:53:08 crc kubenswrapper[4792]: I1127 17:53:08.206589 4792 generic.go:334] "Generic (PLEG): container finished" podID="c1228795-b08e-4f02-ac5c-a9bc71058d23" containerID="b017d3c98356348c093bba068a4e38aace656e5a4397bdb3d1fc1d0aa5dec7c4" exitCode=0 Nov 27 17:53:08 crc kubenswrapper[4792]: I1127 17:53:08.206685 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9" event={"ID":"c1228795-b08e-4f02-ac5c-a9bc71058d23","Type":"ContainerDied","Data":"b017d3c98356348c093bba068a4e38aace656e5a4397bdb3d1fc1d0aa5dec7c4"} Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.721691 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.855993 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-secret-0\") pod \"c1228795-b08e-4f02-ac5c-a9bc71058d23\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.856443 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-inventory\") pod \"c1228795-b08e-4f02-ac5c-a9bc71058d23\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.856469 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5xr2\" (UniqueName: \"kubernetes.io/projected/c1228795-b08e-4f02-ac5c-a9bc71058d23-kube-api-access-s5xr2\") pod \"c1228795-b08e-4f02-ac5c-a9bc71058d23\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.856624 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-combined-ca-bundle\") pod \"c1228795-b08e-4f02-ac5c-a9bc71058d23\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.856662 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-ssh-key\") pod \"c1228795-b08e-4f02-ac5c-a9bc71058d23\" (UID: \"c1228795-b08e-4f02-ac5c-a9bc71058d23\") " Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.863161 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c1228795-b08e-4f02-ac5c-a9bc71058d23" (UID: "c1228795-b08e-4f02-ac5c-a9bc71058d23"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.863619 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1228795-b08e-4f02-ac5c-a9bc71058d23-kube-api-access-s5xr2" (OuterVolumeSpecName: "kube-api-access-s5xr2") pod "c1228795-b08e-4f02-ac5c-a9bc71058d23" (UID: "c1228795-b08e-4f02-ac5c-a9bc71058d23"). InnerVolumeSpecName "kube-api-access-s5xr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.896201 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-inventory" (OuterVolumeSpecName: "inventory") pod "c1228795-b08e-4f02-ac5c-a9bc71058d23" (UID: "c1228795-b08e-4f02-ac5c-a9bc71058d23"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.896364 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c1228795-b08e-4f02-ac5c-a9bc71058d23" (UID: "c1228795-b08e-4f02-ac5c-a9bc71058d23"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.902522 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c1228795-b08e-4f02-ac5c-a9bc71058d23" (UID: "c1228795-b08e-4f02-ac5c-a9bc71058d23"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.962033 4792 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.962071 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.962082 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5xr2\" (UniqueName: \"kubernetes.io/projected/c1228795-b08e-4f02-ac5c-a9bc71058d23-kube-api-access-s5xr2\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.962093 4792 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:09 crc kubenswrapper[4792]: I1127 17:53:09.962103 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c1228795-b08e-4f02-ac5c-a9bc71058d23-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.226935 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9" event={"ID":"c1228795-b08e-4f02-ac5c-a9bc71058d23","Type":"ContainerDied","Data":"5cf2e4f83380fff3f1ca7116b9e779041091a4962a3ce93bad06627137cea47a"} Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.226972 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cf2e4f83380fff3f1ca7116b9e779041091a4962a3ce93bad06627137cea47a" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.227027 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-c92t9" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.438815 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn"] Nov 27 17:53:10 crc kubenswrapper[4792]: E1127 17:53:10.439280 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerName="extract-content" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.439295 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerName="extract-content" Nov 27 17:53:10 crc kubenswrapper[4792]: E1127 17:53:10.439312 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerName="registry-server" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.439320 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerName="registry-server" Nov 27 17:53:10 crc kubenswrapper[4792]: E1127 17:53:10.439334 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerName="extract-utilities" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.439340 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerName="extract-utilities" Nov 27 17:53:10 crc kubenswrapper[4792]: E1127 17:53:10.439355 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1228795-b08e-4f02-ac5c-a9bc71058d23" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.439362 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1228795-b08e-4f02-ac5c-a9bc71058d23" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.442148 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1228795-b08e-4f02-ac5c-a9bc71058d23" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.442178 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="9790c7b4-bc61-4bae-b551-3151a0ad60c0" containerName="registry-server" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.443007 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.454661 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.454813 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.454953 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.455133 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.455369 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.455395 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.455526 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.455603 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn"] Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.578884 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ds6r\" (UniqueName: \"kubernetes.io/projected/83d3f635-5c64-4827-a54d-1b21ca1b6570-kube-api-access-6ds6r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.579071 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.579161 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.579207 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.579331 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.579379 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.579500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.579532 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.579563 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.681765 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.681841 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.681950 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.681994 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.682090 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.682121 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.682155 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.682218 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ds6r\" (UniqueName: \"kubernetes.io/projected/83d3f635-5c64-4827-a54d-1b21ca1b6570-kube-api-access-6ds6r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.682339 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.682817 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.687771 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.689313 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.694390 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.694404 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.695468 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.696026 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.696977 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.715723 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ds6r\" (UniqueName: \"kubernetes.io/projected/83d3f635-5c64-4827-a54d-1b21ca1b6570-kube-api-access-6ds6r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-fqdvn\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:10 crc kubenswrapper[4792]: I1127 17:53:10.788309 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:53:11 crc kubenswrapper[4792]: I1127 17:53:11.394214 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn"] Nov 27 17:53:12 crc kubenswrapper[4792]: I1127 17:53:12.261518 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" event={"ID":"83d3f635-5c64-4827-a54d-1b21ca1b6570","Type":"ContainerStarted","Data":"b1926882f605deedfe615ccf9ead46c13072c9a0e84463a1cf7527b75a334396"} Nov 27 17:53:12 crc kubenswrapper[4792]: I1127 17:53:12.265253 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" event={"ID":"83d3f635-5c64-4827-a54d-1b21ca1b6570","Type":"ContainerStarted","Data":"50429ed0b2becd2f19fa0061a90787f572424e13c88153aef41d0fc1fbd4dabc"} Nov 27 17:53:12 crc kubenswrapper[4792]: I1127 17:53:12.281733 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" podStartSLOduration=1.760105896 podStartE2EDuration="2.281715385s" podCreationTimestamp="2025-11-27 17:53:10 +0000 UTC" firstStartedPulling="2025-11-27 17:53:11.387090004 +0000 UTC m=+2613.729916322" lastFinishedPulling="2025-11-27 17:53:11.908699493 +0000 UTC m=+2614.251525811" observedRunningTime="2025-11-27 17:53:12.281313056 +0000 UTC m=+2614.624139394" watchObservedRunningTime="2025-11-27 17:53:12.281715385 +0000 UTC m=+2614.624541703" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.778777 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vhqbd"] Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.782872 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.795139 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhqbd"] Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.883375 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83213a0e-ad1a-4969-bc77-731e9951f0e9-utilities\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.883668 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83213a0e-ad1a-4969-bc77-731e9951f0e9-catalog-content\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.883891 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c2mh\" (UniqueName: \"kubernetes.io/projected/83213a0e-ad1a-4969-bc77-731e9951f0e9-kube-api-access-7c2mh\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.986387 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c2mh\" (UniqueName: \"kubernetes.io/projected/83213a0e-ad1a-4969-bc77-731e9951f0e9-kube-api-access-7c2mh\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.986556 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83213a0e-ad1a-4969-bc77-731e9951f0e9-utilities\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.986606 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83213a0e-ad1a-4969-bc77-731e9951f0e9-catalog-content\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.987223 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83213a0e-ad1a-4969-bc77-731e9951f0e9-catalog-content\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:27 crc kubenswrapper[4792]: I1127 17:53:27.987423 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83213a0e-ad1a-4969-bc77-731e9951f0e9-utilities\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:28 crc kubenswrapper[4792]: I1127 17:53:28.021076 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7c2mh\" (UniqueName: \"kubernetes.io/projected/83213a0e-ad1a-4969-bc77-731e9951f0e9-kube-api-access-7c2mh\") pod \"community-operators-vhqbd\" (UID: \"83213a0e-ad1a-4969-bc77-731e9951f0e9\") " pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:28 crc kubenswrapper[4792]: I1127 17:53:28.103520 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:28 crc kubenswrapper[4792]: W1127 17:53:28.700002 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83213a0e_ad1a_4969_bc77_731e9951f0e9.slice/crio-e48bc1842e0e8833ee8499f863dbadeb842140afade90aa524f679e9dbe691be WatchSource:0}: Error finding container e48bc1842e0e8833ee8499f863dbadeb842140afade90aa524f679e9dbe691be: Status 404 returned error can't find the container with id e48bc1842e0e8833ee8499f863dbadeb842140afade90aa524f679e9dbe691be Nov 27 17:53:28 crc kubenswrapper[4792]: I1127 17:53:28.711440 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhqbd"] Nov 27 17:53:29 crc kubenswrapper[4792]: I1127 17:53:29.465805 4792 generic.go:334] "Generic (PLEG): container finished" podID="83213a0e-ad1a-4969-bc77-731e9951f0e9" containerID="a60e01eec90d1d9f728ff353562e5bdb29423e7c3cef8df1efa936c35fe0b5c0" exitCode=0 Nov 27 17:53:29 crc kubenswrapper[4792]: I1127 17:53:29.465855 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhqbd" event={"ID":"83213a0e-ad1a-4969-bc77-731e9951f0e9","Type":"ContainerDied","Data":"a60e01eec90d1d9f728ff353562e5bdb29423e7c3cef8df1efa936c35fe0b5c0"} Nov 27 17:53:29 crc kubenswrapper[4792]: I1127 17:53:29.467092 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhqbd" event={"ID":"83213a0e-ad1a-4969-bc77-731e9951f0e9","Type":"ContainerStarted","Data":"e48bc1842e0e8833ee8499f863dbadeb842140afade90aa524f679e9dbe691be"} Nov 27 17:53:35 crc kubenswrapper[4792]: I1127 17:53:35.552505 4792 generic.go:334] "Generic (PLEG): container finished" podID="83213a0e-ad1a-4969-bc77-731e9951f0e9" containerID="f142522968b92d0b0f0f1584167887003954105061513a4eebd2a65b3effc177" exitCode=0 Nov 27 17:53:35 crc kubenswrapper[4792]: I1127 17:53:35.552698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhqbd" event={"ID":"83213a0e-ad1a-4969-bc77-731e9951f0e9","Type":"ContainerDied","Data":"f142522968b92d0b0f0f1584167887003954105061513a4eebd2a65b3effc177"} Nov 27 17:53:37 crc kubenswrapper[4792]: I1127 17:53:37.574742 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhqbd" event={"ID":"83213a0e-ad1a-4969-bc77-731e9951f0e9","Type":"ContainerStarted","Data":"a73f964bf536e162296526a8a23b1d86d428319c03309f41de97dde6f30a22c0"} Nov 27 17:53:38 crc kubenswrapper[4792]: I1127 17:53:38.617937 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vhqbd" podStartSLOduration=4.01841337 podStartE2EDuration="11.617911089s" podCreationTimestamp="2025-11-27 17:53:27 +0000 UTC" firstStartedPulling="2025-11-27 17:53:29.467906988 +0000 UTC m=+2631.810733306" lastFinishedPulling="2025-11-27 17:53:37.067404707 +0000 UTC m=+2639.410231025" observedRunningTime="2025-11-27 17:53:38.604821544 +0000 UTC 
m=+2640.947647872" watchObservedRunningTime="2025-11-27 17:53:38.617911089 +0000 UTC m=+2640.960737407" Nov 27 17:53:48 crc kubenswrapper[4792]: I1127 17:53:48.103586 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:48 crc kubenswrapper[4792]: I1127 17:53:48.104217 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:48 crc kubenswrapper[4792]: I1127 17:53:48.166509 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:48 crc kubenswrapper[4792]: I1127 17:53:48.766789 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vhqbd" Nov 27 17:53:48 crc kubenswrapper[4792]: I1127 17:53:48.872812 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhqbd"] Nov 27 17:53:48 crc kubenswrapper[4792]: I1127 17:53:48.919883 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pf2x7"] Nov 27 17:53:48 crc kubenswrapper[4792]: I1127 17:53:48.920221 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pf2x7" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerName="registry-server" containerID="cri-o://177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0" gracePeriod=2 Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.469524 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.624504 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bdfk\" (UniqueName: \"kubernetes.io/projected/bb471a97-f4b1-488a-99f2-35df6686cd45-kube-api-access-8bdfk\") pod \"bb471a97-f4b1-488a-99f2-35df6686cd45\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.624905 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-utilities\") pod \"bb471a97-f4b1-488a-99f2-35df6686cd45\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.625140 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-catalog-content\") pod \"bb471a97-f4b1-488a-99f2-35df6686cd45\" (UID: \"bb471a97-f4b1-488a-99f2-35df6686cd45\") " Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.628989 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-utilities" (OuterVolumeSpecName: "utilities") pod "bb471a97-f4b1-488a-99f2-35df6686cd45" (UID: "bb471a97-f4b1-488a-99f2-35df6686cd45"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.634718 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb471a97-f4b1-488a-99f2-35df6686cd45-kube-api-access-8bdfk" (OuterVolumeSpecName: "kube-api-access-8bdfk") pod "bb471a97-f4b1-488a-99f2-35df6686cd45" (UID: "bb471a97-f4b1-488a-99f2-35df6686cd45"). InnerVolumeSpecName "kube-api-access-8bdfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.680223 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb471a97-f4b1-488a-99f2-35df6686cd45" (UID: "bb471a97-f4b1-488a-99f2-35df6686cd45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.707705 4792 generic.go:334] "Generic (PLEG): container finished" podID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerID="177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0" exitCode=0 Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.708795 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pf2x7" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.709271 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf2x7" event={"ID":"bb471a97-f4b1-488a-99f2-35df6686cd45","Type":"ContainerDied","Data":"177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0"} Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.709304 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pf2x7" event={"ID":"bb471a97-f4b1-488a-99f2-35df6686cd45","Type":"ContainerDied","Data":"8b1bbcc8df2c7149bcb7496363abd0b3168922b92b34a5866ebcf1875875de1f"} Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.709320 4792 scope.go:117] "RemoveContainer" containerID="177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.728441 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.728470 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bdfk\" (UniqueName: \"kubernetes.io/projected/bb471a97-f4b1-488a-99f2-35df6686cd45-kube-api-access-8bdfk\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.728480 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb471a97-f4b1-488a-99f2-35df6686cd45-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.739178 4792 scope.go:117] "RemoveContainer" containerID="9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.746596 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pf2x7"] Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.769499 4792 scope.go:117] "RemoveContainer" 
containerID="8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.776093 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pf2x7"] Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.837412 4792 scope.go:117] "RemoveContainer" containerID="177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0" Nov 27 17:53:49 crc kubenswrapper[4792]: E1127 17:53:49.838030 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0\": container with ID starting with 177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0 not found: ID does not exist" containerID="177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.838065 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0"} err="failed to get container status \"177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0\": rpc error: code = NotFound desc = could not find container \"177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0\": container with ID starting with 177cfbab5f4cb3958bd6db61ad3b86aa2bc76ba2d644dbae236dd75c7efad7d0 not found: ID does not exist" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.838086 4792 scope.go:117] "RemoveContainer" containerID="9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a" Nov 27 17:53:49 crc kubenswrapper[4792]: E1127 17:53:49.838497 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a\": container with ID starting with 9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a not found: ID does not exist" containerID="9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.838526 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a"} err="failed to get container status \"9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a\": rpc error: code = NotFound desc = could not find container \"9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a\": container with ID starting with 9c3a9f331d51efc4dab4b100dabefbcfbda08abb48144bebf3a8d5aa6b030b3a not found: ID does not exist" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.838542 4792 scope.go:117] "RemoveContainer" containerID="8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a" Nov 27 17:53:49 crc kubenswrapper[4792]: E1127 17:53:49.838784 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a\": container with ID starting with 8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a not found: ID does not exist" containerID="8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a" Nov 27 17:53:49 crc kubenswrapper[4792]: I1127 17:53:49.838812 4792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a"} err="failed to get container status \"8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a\": rpc error: code = NotFound desc = could not find container \"8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a\": container with ID starting with 8d2716789e1595fbe61a37722ab416c9e5a09c3c7d4d595445c5bca4b4a5179a not found: ID does not exist" Nov 27 17:53:50 crc kubenswrapper[4792]: I1127 17:53:50.700733 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" path="/var/lib/kubelet/pods/bb471a97-f4b1-488a-99f2-35df6686cd45/volumes" Nov 27 17:54:08 crc kubenswrapper[4792]: I1127 17:54:08.290990 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:54:08 crc kubenswrapper[4792]: I1127 17:54:08.291554 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:54:38 crc kubenswrapper[4792]: I1127 17:54:38.290127 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:54:38 crc kubenswrapper[4792]: I1127 17:54:38.290715 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:55:08 crc kubenswrapper[4792]: I1127 17:55:08.290137 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 17:55:08 crc kubenswrapper[4792]: I1127 17:55:08.290666 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 17:55:08 crc kubenswrapper[4792]: I1127 17:55:08.290719 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 17:55:08 crc kubenswrapper[4792]: I1127 17:55:08.291617 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"87957b4092fe0fede78669473477b181921b1cf815808011da28af91b060d640"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon 
Nov 27 17:55:08 crc kubenswrapper[4792]: I1127 17:55:08.291701 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://87957b4092fe0fede78669473477b181921b1cf815808011da28af91b060d640" gracePeriod=600
Nov 27 17:55:08 crc kubenswrapper[4792]: I1127 17:55:08.618223 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="87957b4092fe0fede78669473477b181921b1cf815808011da28af91b060d640" exitCode=0
Nov 27 17:55:08 crc kubenswrapper[4792]: I1127 17:55:08.618285 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"87957b4092fe0fede78669473477b181921b1cf815808011da28af91b060d640"}
Nov 27 17:55:08 crc kubenswrapper[4792]: I1127 17:55:08.618508 4792 scope.go:117] "RemoveContainer" containerID="94de09749ec4b585bcdd9c7fbdcee83154e9d194eb04bdcfa445519f466e3d7d"
Nov 27 17:55:09 crc kubenswrapper[4792]: I1127 17:55:09.634216 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"}
Nov 27 17:55:52 crc kubenswrapper[4792]: I1127 17:55:52.106042 4792 generic.go:334] "Generic (PLEG): container finished" podID="83d3f635-5c64-4827-a54d-1b21ca1b6570" containerID="b1926882f605deedfe615ccf9ead46c13072c9a0e84463a1cf7527b75a334396" exitCode=0
Nov 27 17:55:52 crc kubenswrapper[4792]: I1127 17:55:52.106135 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" event={"ID":"83d3f635-5c64-4827-a54d-1b21ca1b6570","Type":"ContainerDied","Data":"b1926882f605deedfe615ccf9ead46c13072c9a0e84463a1cf7527b75a334396"}
Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.728855 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn"
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.894144 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-1\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.894424 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-combined-ca-bundle\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.894520 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-ssh-key\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.894637 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-extra-config-0\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.894730 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-0\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.894814 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-1\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.894911 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-0\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.895056 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-inventory\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.895152 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ds6r\" (UniqueName: \"kubernetes.io/projected/83d3f635-5c64-4827-a54d-1b21ca1b6570-kube-api-access-6ds6r\") pod \"83d3f635-5c64-4827-a54d-1b21ca1b6570\" (UID: \"83d3f635-5c64-4827-a54d-1b21ca1b6570\") " Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.901256 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/83d3f635-5c64-4827-a54d-1b21ca1b6570-kube-api-access-6ds6r" (OuterVolumeSpecName: "kube-api-access-6ds6r") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "kube-api-access-6ds6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.902237 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.926701 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.930500 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.931615 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.934827 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.935264 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.937028 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.949811 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-inventory" (OuterVolumeSpecName: "inventory") pod "83d3f635-5c64-4827-a54d-1b21ca1b6570" (UID: "83d3f635-5c64-4827-a54d-1b21ca1b6570"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.998876 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.998936 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ds6r\" (UniqueName: \"kubernetes.io/projected/83d3f635-5c64-4827-a54d-1b21ca1b6570-kube-api-access-6ds6r\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.998953 4792 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.998965 4792 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.998978 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.998991 4792 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.999014 4792 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.999032 4792 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:53 crc kubenswrapper[4792]: I1127 17:55:53.999119 4792 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/83d3f635-5c64-4827-a54d-1b21ca1b6570-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.131749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" event={"ID":"83d3f635-5c64-4827-a54d-1b21ca1b6570","Type":"ContainerDied","Data":"50429ed0b2becd2f19fa0061a90787f572424e13c88153aef41d0fc1fbd4dabc"} Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.131797 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50429ed0b2becd2f19fa0061a90787f572424e13c88153aef41d0fc1fbd4dabc" Nov 27 17:55:54 crc 
kubenswrapper[4792]: I1127 17:55:54.131816 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-fqdvn" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.264876 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z"] Nov 27 17:55:54 crc kubenswrapper[4792]: E1127 17:55:54.265470 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerName="registry-server" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.265494 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerName="registry-server" Nov 27 17:55:54 crc kubenswrapper[4792]: E1127 17:55:54.265511 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerName="extract-content" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.265526 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerName="extract-content" Nov 27 17:55:54 crc kubenswrapper[4792]: E1127 17:55:54.265539 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerName="extract-utilities" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.265549 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerName="extract-utilities" Nov 27 17:55:54 crc kubenswrapper[4792]: E1127 17:55:54.265578 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d3f635-5c64-4827-a54d-1b21ca1b6570" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.265584 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d3f635-5c64-4827-a54d-1b21ca1b6570" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.265867 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d3f635-5c64-4827-a54d-1b21ca1b6570" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.265898 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb471a97-f4b1-488a-99f2-35df6686cd45" containerName="registry-server" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.266795 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.270595 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.271079 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.272191 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.273953 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.280541 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.285589 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z"] Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.412254 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.412731 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.412983 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krglx\" (UniqueName: \"kubernetes.io/projected/8bfd070a-8c21-4c11-b794-c5410285a701-kube-api-access-krglx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.413113 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.413402 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 
17:55:54.413634 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.414129 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.518724 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krglx\" (UniqueName: \"kubernetes.io/projected/8bfd070a-8c21-4c11-b794-c5410285a701-kube-api-access-krglx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.518849 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.518946 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.519004 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.519097 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.519190 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: 
\"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.519275 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.523011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.523108 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.524567 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.525349 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.527953 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.534176 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.548882 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krglx\" (UniqueName: \"kubernetes.io/projected/8bfd070a-8c21-4c11-b794-c5410285a701-kube-api-access-krglx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z\" (UID: 
\"8bfd070a-8c21-4c11-b794-c5410285a701\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:54 crc kubenswrapper[4792]: I1127 17:55:54.593032 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" Nov 27 17:55:55 crc kubenswrapper[4792]: I1127 17:55:55.217319 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z"] Nov 27 17:55:56 crc kubenswrapper[4792]: I1127 17:55:56.154634 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" event={"ID":"8bfd070a-8c21-4c11-b794-c5410285a701","Type":"ContainerStarted","Data":"0301ec55f685ef85ad771a750cf338da9ba9defdc26412348977df1800bfee46"} Nov 27 17:55:57 crc kubenswrapper[4792]: I1127 17:55:57.168558 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" event={"ID":"8bfd070a-8c21-4c11-b794-c5410285a701","Type":"ContainerStarted","Data":"25b9cea6c9640e09f581bd1ba383f7219d11318295b7253220bc85d39b33ea5e"} Nov 27 17:55:57 crc kubenswrapper[4792]: I1127 17:55:57.194024 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" podStartSLOduration=2.403154175 podStartE2EDuration="3.194004992s" podCreationTimestamp="2025-11-27 17:55:54 +0000 UTC" firstStartedPulling="2025-11-27 17:55:55.221455611 +0000 UTC m=+2777.564281929" lastFinishedPulling="2025-11-27 17:55:56.012306428 +0000 UTC m=+2778.355132746" observedRunningTime="2025-11-27 17:55:57.193001327 +0000 UTC m=+2779.535827655" watchObservedRunningTime="2025-11-27 17:55:57.194004992 +0000 UTC m=+2779.536831310" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.742103 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6br8p"] Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.746612 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.756489 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6br8p"] Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.896183 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsvrq\" (UniqueName: \"kubernetes.io/projected/18236168-f55c-469b-b287-32cd38fda8f0-kube-api-access-nsvrq\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.896260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-catalog-content\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.896477 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-utilities\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.998867 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsvrq\" (UniqueName: \"kubernetes.io/projected/18236168-f55c-469b-b287-32cd38fda8f0-kube-api-access-nsvrq\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.999294 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-catalog-content\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.999506 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-utilities\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.999746 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-catalog-content\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:21 crc kubenswrapper[4792]: I1127 17:56:21.999905 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-utilities\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:22 crc kubenswrapper[4792]: I1127 17:56:22.026678 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nsvrq\" (UniqueName: \"kubernetes.io/projected/18236168-f55c-469b-b287-32cd38fda8f0-kube-api-access-nsvrq\") pod \"redhat-marketplace-6br8p\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:22 crc kubenswrapper[4792]: I1127 17:56:22.070905 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:22 crc kubenswrapper[4792]: I1127 17:56:22.681514 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6br8p"] Nov 27 17:56:23 crc kubenswrapper[4792]: I1127 17:56:23.529487 4792 generic.go:334] "Generic (PLEG): container finished" podID="18236168-f55c-469b-b287-32cd38fda8f0" containerID="6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f" exitCode=0 Nov 27 17:56:23 crc kubenswrapper[4792]: I1127 17:56:23.530137 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6br8p" event={"ID":"18236168-f55c-469b-b287-32cd38fda8f0","Type":"ContainerDied","Data":"6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f"} Nov 27 17:56:23 crc kubenswrapper[4792]: I1127 17:56:23.530165 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6br8p" event={"ID":"18236168-f55c-469b-b287-32cd38fda8f0","Type":"ContainerStarted","Data":"805e7f39cd59388f60e41b5496d23862efdb6c8ca0bc975bc00b9a553d445bb4"} Nov 27 17:56:25 crc kubenswrapper[4792]: I1127 17:56:25.552720 4792 generic.go:334] "Generic (PLEG): container finished" podID="18236168-f55c-469b-b287-32cd38fda8f0" containerID="aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a" exitCode=0 Nov 27 17:56:25 crc kubenswrapper[4792]: I1127 17:56:25.552872 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6br8p" event={"ID":"18236168-f55c-469b-b287-32cd38fda8f0","Type":"ContainerDied","Data":"aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a"} Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.321256 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nppf8"] Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.324363 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.336238 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nppf8"] Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.428823 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-catalog-content\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.429117 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk5r5\" (UniqueName: \"kubernetes.io/projected/fb422521-b02f-413a-a9c7-69549b41bb8b-kube-api-access-hk5r5\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.429542 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-utilities\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.531767 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-catalog-content\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.532143 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk5r5\" (UniqueName: \"kubernetes.io/projected/fb422521-b02f-413a-a9c7-69549b41bb8b-kube-api-access-hk5r5\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.532358 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-catalog-content\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.532380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-utilities\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.532850 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-utilities\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.559244 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hk5r5\" (UniqueName: \"kubernetes.io/projected/fb422521-b02f-413a-a9c7-69549b41bb8b-kube-api-access-hk5r5\") pod \"redhat-operators-nppf8\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.567305 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6br8p" event={"ID":"18236168-f55c-469b-b287-32cd38fda8f0","Type":"ContainerStarted","Data":"3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b"} Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.591289 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6br8p" podStartSLOduration=3.113882384 podStartE2EDuration="5.591266986s" podCreationTimestamp="2025-11-27 17:56:21 +0000 UTC" firstStartedPulling="2025-11-27 17:56:23.532741888 +0000 UTC m=+2805.875568206" lastFinishedPulling="2025-11-27 17:56:26.01012649 +0000 UTC m=+2808.352952808" observedRunningTime="2025-11-27 17:56:26.58699766 +0000 UTC m=+2808.929823978" watchObservedRunningTime="2025-11-27 17:56:26.591266986 +0000 UTC m=+2808.934093304" Nov 27 17:56:26 crc kubenswrapper[4792]: I1127 17:56:26.654804 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:27 crc kubenswrapper[4792]: I1127 17:56:27.238170 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nppf8"] Nov 27 17:56:27 crc kubenswrapper[4792]: W1127 17:56:27.250209 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb422521_b02f_413a_a9c7_69549b41bb8b.slice/crio-2c83fb0a3426cf4cf02d0fae73fb74f8178875d570a5e6fe1eff835500d8c96e WatchSource:0}: Error finding container 2c83fb0a3426cf4cf02d0fae73fb74f8178875d570a5e6fe1eff835500d8c96e: Status 404 returned error can't find the container with id 2c83fb0a3426cf4cf02d0fae73fb74f8178875d570a5e6fe1eff835500d8c96e Nov 27 17:56:27 crc kubenswrapper[4792]: I1127 17:56:27.581341 4792 generic.go:334] "Generic (PLEG): container finished" podID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerID="98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018" exitCode=0 Nov 27 17:56:27 crc kubenswrapper[4792]: I1127 17:56:27.581457 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nppf8" event={"ID":"fb422521-b02f-413a-a9c7-69549b41bb8b","Type":"ContainerDied","Data":"98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018"} Nov 27 17:56:27 crc kubenswrapper[4792]: I1127 17:56:27.581501 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nppf8" event={"ID":"fb422521-b02f-413a-a9c7-69549b41bb8b","Type":"ContainerStarted","Data":"2c83fb0a3426cf4cf02d0fae73fb74f8178875d570a5e6fe1eff835500d8c96e"} Nov 27 17:56:28 crc kubenswrapper[4792]: I1127 17:56:28.594163 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nppf8" event={"ID":"fb422521-b02f-413a-a9c7-69549b41bb8b","Type":"ContainerStarted","Data":"6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa"} Nov 27 17:56:32 crc kubenswrapper[4792]: I1127 17:56:32.071427 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:32 crc kubenswrapper[4792]: 
I1127 17:56:32.072012 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:33 crc kubenswrapper[4792]: I1127 17:56:33.238280 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6br8p" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="registry-server" probeResult="failure" output=< Nov 27 17:56:33 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:56:33 crc kubenswrapper[4792]: > Nov 27 17:56:37 crc kubenswrapper[4792]: I1127 17:56:37.695351 4792 generic.go:334] "Generic (PLEG): container finished" podID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerID="6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa" exitCode=0 Nov 27 17:56:37 crc kubenswrapper[4792]: I1127 17:56:37.695533 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nppf8" event={"ID":"fb422521-b02f-413a-a9c7-69549b41bb8b","Type":"ContainerDied","Data":"6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa"} Nov 27 17:56:38 crc kubenswrapper[4792]: I1127 17:56:38.741632 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nppf8" event={"ID":"fb422521-b02f-413a-a9c7-69549b41bb8b","Type":"ContainerStarted","Data":"83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25"} Nov 27 17:56:38 crc kubenswrapper[4792]: I1127 17:56:38.769524 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nppf8" podStartSLOduration=2.254367758 podStartE2EDuration="12.769481372s" podCreationTimestamp="2025-11-27 17:56:26 +0000 UTC" firstStartedPulling="2025-11-27 17:56:27.583822516 +0000 UTC m=+2809.926648824" lastFinishedPulling="2025-11-27 17:56:38.09893612 +0000 UTC m=+2820.441762438" observedRunningTime="2025-11-27 17:56:38.761186287 +0000 UTC m=+2821.104012605" watchObservedRunningTime="2025-11-27 17:56:38.769481372 +0000 UTC m=+2821.112307690" Nov 27 17:56:42 crc kubenswrapper[4792]: I1127 17:56:42.124474 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:42 crc kubenswrapper[4792]: I1127 17:56:42.174478 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:42 crc kubenswrapper[4792]: I1127 17:56:42.370195 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6br8p"] Nov 27 17:56:43 crc kubenswrapper[4792]: I1127 17:56:43.290175 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6br8p" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="registry-server" containerID="cri-o://3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b" gracePeriod=2 Nov 27 17:56:43 crc kubenswrapper[4792]: I1127 17:56:43.826678 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:43 crc kubenswrapper[4792]: I1127 17:56:43.972636 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-catalog-content\") pod \"18236168-f55c-469b-b287-32cd38fda8f0\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " Nov 27 17:56:43 crc kubenswrapper[4792]: I1127 17:56:43.972783 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-utilities\") pod \"18236168-f55c-469b-b287-32cd38fda8f0\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " Nov 27 17:56:43 crc kubenswrapper[4792]: I1127 17:56:43.972961 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsvrq\" (UniqueName: \"kubernetes.io/projected/18236168-f55c-469b-b287-32cd38fda8f0-kube-api-access-nsvrq\") pod \"18236168-f55c-469b-b287-32cd38fda8f0\" (UID: \"18236168-f55c-469b-b287-32cd38fda8f0\") " Nov 27 17:56:43 crc kubenswrapper[4792]: I1127 17:56:43.973562 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-utilities" (OuterVolumeSpecName: "utilities") pod "18236168-f55c-469b-b287-32cd38fda8f0" (UID: "18236168-f55c-469b-b287-32cd38fda8f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:56:43 crc kubenswrapper[4792]: I1127 17:56:43.991176 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18236168-f55c-469b-b287-32cd38fda8f0" (UID: "18236168-f55c-469b-b287-32cd38fda8f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.008532 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18236168-f55c-469b-b287-32cd38fda8f0-kube-api-access-nsvrq" (OuterVolumeSpecName: "kube-api-access-nsvrq") pod "18236168-f55c-469b-b287-32cd38fda8f0" (UID: "18236168-f55c-469b-b287-32cd38fda8f0"). InnerVolumeSpecName "kube-api-access-nsvrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.078006 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.078075 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18236168-f55c-469b-b287-32cd38fda8f0-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.078101 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsvrq\" (UniqueName: \"kubernetes.io/projected/18236168-f55c-469b-b287-32cd38fda8f0-kube-api-access-nsvrq\") on node \"crc\" DevicePath \"\"" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.301429 4792 generic.go:334] "Generic (PLEG): container finished" podID="18236168-f55c-469b-b287-32cd38fda8f0" containerID="3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b" exitCode=0 Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.301471 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6br8p" event={"ID":"18236168-f55c-469b-b287-32cd38fda8f0","Type":"ContainerDied","Data":"3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b"} Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.301498 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6br8p" event={"ID":"18236168-f55c-469b-b287-32cd38fda8f0","Type":"ContainerDied","Data":"805e7f39cd59388f60e41b5496d23862efdb6c8ca0bc975bc00b9a553d445bb4"} Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.301516 4792 scope.go:117] "RemoveContainer" containerID="3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.301972 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6br8p" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.336564 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6br8p"] Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.343529 4792 scope.go:117] "RemoveContainer" containerID="aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.348150 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6br8p"] Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.379509 4792 scope.go:117] "RemoveContainer" containerID="6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.440068 4792 scope.go:117] "RemoveContainer" containerID="3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b" Nov 27 17:56:44 crc kubenswrapper[4792]: E1127 17:56:44.441409 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b\": container with ID starting with 3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b not found: ID does not exist" containerID="3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.441470 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b"} err="failed to get container status \"3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b\": rpc error: code = NotFound desc = could not find container \"3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b\": container with ID starting with 3edd76aaed95792f52c782eb505cd41f1616b3445b2d702ba0951f5f49c2196b not found: ID does not exist" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.442105 4792 scope.go:117] "RemoveContainer" containerID="aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a" Nov 27 17:56:44 crc kubenswrapper[4792]: E1127 17:56:44.442636 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a\": container with ID starting with aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a not found: ID does not exist" containerID="aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.442718 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a"} err="failed to get container status \"aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a\": rpc error: code = NotFound desc = could not find container \"aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a\": container with ID starting with aac6872ce683507c11dc3225c2a502a55a9c80a36266a878ee40c0210a5fd44a not found: ID does not exist" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.442760 4792 scope.go:117] "RemoveContainer" containerID="6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f" Nov 27 17:56:44 crc kubenswrapper[4792]: E1127 17:56:44.443125 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f\": container with ID starting with 6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f not found: ID does not exist" containerID="6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.443207 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f"} err="failed to get container status \"6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f\": rpc error: code = NotFound desc = could not find container \"6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f\": container with ID starting with 6330c6631550f0d7af1c143dac4214002b6af58ef3668d29b360242b8d39a24f not found: ID does not exist" Nov 27 17:56:44 crc kubenswrapper[4792]: I1127 17:56:44.701387 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18236168-f55c-469b-b287-32cd38fda8f0" path="/var/lib/kubelet/pods/18236168-f55c-469b-b287-32cd38fda8f0/volumes" Nov 27 17:56:46 crc kubenswrapper[4792]: I1127 17:56:46.654917 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:46 crc kubenswrapper[4792]: I1127 17:56:46.655272 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:47 crc kubenswrapper[4792]: I1127 17:56:47.730123 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nppf8" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="registry-server" probeResult="failure" output=< Nov 27 17:56:47 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 17:56:47 crc kubenswrapper[4792]: > Nov 27 17:56:56 crc kubenswrapper[4792]: I1127 17:56:56.708866 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:56 crc kubenswrapper[4792]: I1127 17:56:56.767306 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:56 crc kubenswrapper[4792]: I1127 17:56:56.954313 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nppf8"] Nov 27 17:56:58 crc kubenswrapper[4792]: I1127 17:56:58.446422 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nppf8" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="registry-server" containerID="cri-o://83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25" gracePeriod=2 Nov 27 17:56:58 crc kubenswrapper[4792]: I1127 17:56:58.953502 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.056564 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-catalog-content\") pod \"fb422521-b02f-413a-a9c7-69549b41bb8b\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.056703 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-utilities\") pod \"fb422521-b02f-413a-a9c7-69549b41bb8b\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.056987 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk5r5\" (UniqueName: \"kubernetes.io/projected/fb422521-b02f-413a-a9c7-69549b41bb8b-kube-api-access-hk5r5\") pod \"fb422521-b02f-413a-a9c7-69549b41bb8b\" (UID: \"fb422521-b02f-413a-a9c7-69549b41bb8b\") " Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.057638 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-utilities" (OuterVolumeSpecName: "utilities") pod "fb422521-b02f-413a-a9c7-69549b41bb8b" (UID: "fb422521-b02f-413a-a9c7-69549b41bb8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.064898 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb422521-b02f-413a-a9c7-69549b41bb8b-kube-api-access-hk5r5" (OuterVolumeSpecName: "kube-api-access-hk5r5") pod "fb422521-b02f-413a-a9c7-69549b41bb8b" (UID: "fb422521-b02f-413a-a9c7-69549b41bb8b"). InnerVolumeSpecName "kube-api-access-hk5r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.159439 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk5r5\" (UniqueName: \"kubernetes.io/projected/fb422521-b02f-413a-a9c7-69549b41bb8b-kube-api-access-hk5r5\") on node \"crc\" DevicePath \"\"" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.159466 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.173298 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb422521-b02f-413a-a9c7-69549b41bb8b" (UID: "fb422521-b02f-413a-a9c7-69549b41bb8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.261989 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb422521-b02f-413a-a9c7-69549b41bb8b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.457612 4792 generic.go:334] "Generic (PLEG): container finished" podID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerID="83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25" exitCode=0 Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.457674 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nppf8" event={"ID":"fb422521-b02f-413a-a9c7-69549b41bb8b","Type":"ContainerDied","Data":"83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25"} Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.457710 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nppf8" event={"ID":"fb422521-b02f-413a-a9c7-69549b41bb8b","Type":"ContainerDied","Data":"2c83fb0a3426cf4cf02d0fae73fb74f8178875d570a5e6fe1eff835500d8c96e"} Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.457731 4792 scope.go:117] "RemoveContainer" containerID="83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.457806 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nppf8" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.493668 4792 scope.go:117] "RemoveContainer" containerID="6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.500840 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nppf8"] Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.514970 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nppf8"] Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.518340 4792 scope.go:117] "RemoveContainer" containerID="98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.590833 4792 scope.go:117] "RemoveContainer" containerID="83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25" Nov 27 17:56:59 crc kubenswrapper[4792]: E1127 17:56:59.591348 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25\": container with ID starting with 83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25 not found: ID does not exist" containerID="83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25" Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.591387 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25"} err="failed to get container status \"83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25\": rpc error: code = NotFound desc = could not find container \"83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25\": container with ID starting with 83b6228fc197dd33f46e4f34ae021ce8b9eb3f5282ce8d9ddfd0a25ab8e32f25 not found: ID does not exist" Nov 27 17:56:59 crc 
kubenswrapper[4792]: I1127 17:56:59.591408 4792 scope.go:117] "RemoveContainer" containerID="6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa"
Nov 27 17:56:59 crc kubenswrapper[4792]: E1127 17:56:59.591716 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa\": container with ID starting with 6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa not found: ID does not exist" containerID="6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa"
Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.591740 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa"} err="failed to get container status \"6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa\": rpc error: code = NotFound desc = could not find container \"6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa\": container with ID starting with 6f0d38677d2605280849e65052f3e35286881d5b19e0e945d5da17fdb93023fa not found: ID does not exist"
Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.591754 4792 scope.go:117] "RemoveContainer" containerID="98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018"
Nov 27 17:56:59 crc kubenswrapper[4792]: E1127 17:56:59.591991 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018\": container with ID starting with 98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018 not found: ID does not exist" containerID="98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018"
Nov 27 17:56:59 crc kubenswrapper[4792]: I1127 17:56:59.592012 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018"} err="failed to get container status \"98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018\": rpc error: code = NotFound desc = could not find container \"98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018\": container with ID starting with 98edb315bc4fb1e5fa1697f46b44e0bf689ebd21b315f09d378ecdad92bf2018 not found: ID does not exist"
Nov 27 17:57:00 crc kubenswrapper[4792]: I1127 17:57:00.706293 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" path="/var/lib/kubelet/pods/fb422521-b02f-413a-a9c7-69549b41bb8b/volumes"
Nov 27 17:57:08 crc kubenswrapper[4792]: I1127 17:57:08.291192 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:57:08 crc kubenswrapper[4792]: I1127 17:57:08.291859 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 17:57:38 crc kubenswrapper[4792]: I1127 17:57:38.290736 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:57:38 crc kubenswrapper[4792]: I1127 17:57:38.291305 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 17:58:08 crc kubenswrapper[4792]: I1127 17:58:08.290540 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 17:58:08 crc kubenswrapper[4792]: I1127 17:58:08.292254 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 17:58:08 crc kubenswrapper[4792]: I1127 17:58:08.292393 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx"
Nov 27 17:58:08 crc kubenswrapper[4792]: I1127 17:58:08.293466 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 27 17:58:08 crc kubenswrapper[4792]: I1127 17:58:08.293638 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" gracePeriod=600
Nov 27 17:58:08 crc kubenswrapper[4792]: E1127 17:58:08.982418 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:58:09 crc kubenswrapper[4792]: I1127 17:58:09.399092 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" exitCode=0
Nov 27 17:58:09 crc kubenswrapper[4792]: I1127 17:58:09.399142 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"}
Nov 27 17:58:09 crc kubenswrapper[4792]: I1127 17:58:09.399182 4792 scope.go:117] "RemoveContainer" containerID="87957b4092fe0fede78669473477b181921b1cf815808011da28af91b060d640"
Nov 27 17:58:09 crc kubenswrapper[4792]: I1127 17:58:09.399953 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:58:09 crc kubenswrapper[4792]: E1127 17:58:09.400259 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:58:13 crc kubenswrapper[4792]: I1127 17:58:13.447414 4792 generic.go:334] "Generic (PLEG): container finished" podID="8bfd070a-8c21-4c11-b794-c5410285a701" containerID="25b9cea6c9640e09f581bd1ba383f7219d11318295b7253220bc85d39b33ea5e" exitCode=0
Nov 27 17:58:13 crc kubenswrapper[4792]: I1127 17:58:13.447517 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" event={"ID":"8bfd070a-8c21-4c11-b794-c5410285a701","Type":"ContainerDied","Data":"25b9cea6c9640e09f581bd1ba383f7219d11318295b7253220bc85d39b33ea5e"}
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.013424 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.122422 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-2\") pod \"8bfd070a-8c21-4c11-b794-c5410285a701\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") "
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.122525 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-0\") pod \"8bfd070a-8c21-4c11-b794-c5410285a701\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") "
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.122683 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-telemetry-combined-ca-bundle\") pod \"8bfd070a-8c21-4c11-b794-c5410285a701\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") "
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.122731 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-inventory\") pod \"8bfd070a-8c21-4c11-b794-c5410285a701\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") "
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.122762 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krglx\" (UniqueName: \"kubernetes.io/projected/8bfd070a-8c21-4c11-b794-c5410285a701-kube-api-access-krglx\") pod \"8bfd070a-8c21-4c11-b794-c5410285a701\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") "
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.122846 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-1\") pod \"8bfd070a-8c21-4c11-b794-c5410285a701\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") "
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.123005 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ssh-key\") pod \"8bfd070a-8c21-4c11-b794-c5410285a701\" (UID: \"8bfd070a-8c21-4c11-b794-c5410285a701\") "
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.128658 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8bfd070a-8c21-4c11-b794-c5410285a701" (UID: "8bfd070a-8c21-4c11-b794-c5410285a701"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.131071 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bfd070a-8c21-4c11-b794-c5410285a701-kube-api-access-krglx" (OuterVolumeSpecName: "kube-api-access-krglx") pod "8bfd070a-8c21-4c11-b794-c5410285a701" (UID: "8bfd070a-8c21-4c11-b794-c5410285a701"). InnerVolumeSpecName "kube-api-access-krglx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.158144 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8bfd070a-8c21-4c11-b794-c5410285a701" (UID: "8bfd070a-8c21-4c11-b794-c5410285a701"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.163270 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8bfd070a-8c21-4c11-b794-c5410285a701" (UID: "8bfd070a-8c21-4c11-b794-c5410285a701"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.168228 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8bfd070a-8c21-4c11-b794-c5410285a701" (UID: "8bfd070a-8c21-4c11-b794-c5410285a701"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.168462 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8bfd070a-8c21-4c11-b794-c5410285a701" (UID: "8bfd070a-8c21-4c11-b794-c5410285a701"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.182539 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-inventory" (OuterVolumeSpecName: "inventory") pod "8bfd070a-8c21-4c11-b794-c5410285a701" (UID: "8bfd070a-8c21-4c11-b794-c5410285a701"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.226638 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.226705 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.226720 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.226733 4792 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.226746 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-inventory\") on node \"crc\" DevicePath \"\""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.226759 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krglx\" (UniqueName: \"kubernetes.io/projected/8bfd070a-8c21-4c11-b794-c5410285a701-kube-api-access-krglx\") on node \"crc\" DevicePath \"\""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.226772 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bfd070a-8c21-4c11-b794-c5410285a701-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.476188 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z" event={"ID":"8bfd070a-8c21-4c11-b794-c5410285a701","Type":"ContainerDied","Data":"0301ec55f685ef85ad771a750cf338da9ba9defdc26412348977df1800bfee46"}
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.476248 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0301ec55f685ef85ad771a750cf338da9ba9defdc26412348977df1800bfee46"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.476223 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.589119 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"]
Nov 27 17:58:15 crc kubenswrapper[4792]: E1127 17:58:15.589745 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="extract-utilities"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.589770 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="extract-utilities"
Nov 27 17:58:15 crc kubenswrapper[4792]: E1127 17:58:15.589785 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="extract-content"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.589794 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="extract-content"
Nov 27 17:58:15 crc kubenswrapper[4792]: E1127 17:58:15.589810 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="extract-content"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.589820 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="extract-content"
Nov 27 17:58:15 crc kubenswrapper[4792]: E1127 17:58:15.589852 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="registry-server"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.589862 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="registry-server"
Nov 27 17:58:15 crc kubenswrapper[4792]: E1127 17:58:15.589881 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="extract-utilities"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.589891 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="extract-utilities"
Nov 27 17:58:15 crc kubenswrapper[4792]: E1127 17:58:15.589915 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bfd070a-8c21-4c11-b794-c5410285a701" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.589924 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bfd070a-8c21-4c11-b794-c5410285a701" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:58:15 crc kubenswrapper[4792]: E1127 17:58:15.589961 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="registry-server"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.589970 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="registry-server"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.590285 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb422521-b02f-413a-a9c7-69549b41bb8b" containerName="registry-server"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.590324 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bfd070a-8c21-4c11-b794-c5410285a701" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.590340 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="18236168-f55c-469b-b287-32cd38fda8f0" containerName="registry-server"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.591430 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.599076 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.599166 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.599259 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.602876 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.602937 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.610359 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"]
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.646461 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzlhd\" (UniqueName: \"kubernetes.io/projected/78842a98-31e3-4f0b-8f35-6b8a1856a994-kube-api-access-kzlhd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.646635 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.646797 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.646885 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.646943 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.646992 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.647239 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.749589 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.749698 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.749732 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.749763 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.749790 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.749875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.749998 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzlhd\" (UniqueName: \"kubernetes.io/projected/78842a98-31e3-4f0b-8f35-6b8a1856a994-kube-api-access-kzlhd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.754447 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.754736 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.755638 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.755758 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.756215 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.764300 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.766440 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzlhd\" (UniqueName: \"kubernetes.io/projected/78842a98-31e3-4f0b-8f35-6b8a1856a994-kube-api-access-kzlhd\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:15 crc kubenswrapper[4792]: I1127 17:58:15.909240 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 17:58:16 crc kubenswrapper[4792]: I1127 17:58:16.452397 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"]
Nov 27 17:58:16 crc kubenswrapper[4792]: I1127 17:58:16.460753 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 27 17:58:16 crc kubenswrapper[4792]: I1127 17:58:16.491257 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl" event={"ID":"78842a98-31e3-4f0b-8f35-6b8a1856a994","Type":"ContainerStarted","Data":"29d710b030b617193ab4c02950da1ce73f34f443a7e301f83b085f67351867b7"}
Nov 27 17:58:18 crc kubenswrapper[4792]: I1127 17:58:18.512752 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl" event={"ID":"78842a98-31e3-4f0b-8f35-6b8a1856a994","Type":"ContainerStarted","Data":"44d3b0744f3f1892c64ea78426c43d636afb31d5589ab3a43da9f70347da630c"}
Nov 27 17:58:18 crc kubenswrapper[4792]: I1127 17:58:18.551149 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl" podStartSLOduration=2.813887667 podStartE2EDuration="3.5511312s" podCreationTimestamp="2025-11-27 17:58:15 +0000 UTC" firstStartedPulling="2025-11-27 17:58:16.46039016 +0000 UTC m=+2918.803216478" lastFinishedPulling="2025-11-27 17:58:17.197633693 +0000 UTC m=+2919.540460011" observedRunningTime="2025-11-27 17:58:18.540613029 +0000 UTC m=+2920.883439367" watchObservedRunningTime="2025-11-27 17:58:18.5511312 +0000 UTC m=+2920.893957518"
Nov 27 17:58:23 crc kubenswrapper[4792]: I1127 17:58:23.687539 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:58:23 crc kubenswrapper[4792]: E1127 17:58:23.688924 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:58:38 crc kubenswrapper[4792]: I1127 17:58:38.721107 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:58:38 crc kubenswrapper[4792]: E1127 17:58:38.722109 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:58:50 crc kubenswrapper[4792]: I1127 17:58:50.686736 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:58:50 crc kubenswrapper[4792]: E1127 17:58:50.687438 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:59:02 crc kubenswrapper[4792]: I1127 17:59:02.688321 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:59:02 crc kubenswrapper[4792]: E1127 17:59:02.689327 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:59:14 crc kubenswrapper[4792]: I1127 17:59:14.686811 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:59:14 crc kubenswrapper[4792]: E1127 17:59:14.687624 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:59:25 crc kubenswrapper[4792]: I1127 17:59:25.688071 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:59:25 crc kubenswrapper[4792]: E1127 17:59:25.688934 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:59:36 crc kubenswrapper[4792]: I1127 17:59:36.687087 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:59:36 crc kubenswrapper[4792]: E1127 17:59:36.687844 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 17:59:49 crc kubenswrapper[4792]: I1127 17:59:49.687606 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 17:59:49 crc kubenswrapper[4792]: E1127 17:59:49.688694 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.174872 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"]
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.178566 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.182759 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.182773 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.188368 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"]
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.237231 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/111c652f-251a-4e87-857d-f6c8df36396d-secret-volume\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.237303 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vxk\" (UniqueName: \"kubernetes.io/projected/111c652f-251a-4e87-857d-f6c8df36396d-kube-api-access-m8vxk\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.237405 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/111c652f-251a-4e87-857d-f6c8df36396d-config-volume\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.340184 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/111c652f-251a-4e87-857d-f6c8df36396d-secret-volume\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.340232 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vxk\" (UniqueName: \"kubernetes.io/projected/111c652f-251a-4e87-857d-f6c8df36396d-kube-api-access-m8vxk\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.340320 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/111c652f-251a-4e87-857d-f6c8df36396d-config-volume\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.341741 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/111c652f-251a-4e87-857d-f6c8df36396d-config-volume\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.347082 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/111c652f-251a-4e87-857d-f6c8df36396d-secret-volume\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.356891 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vxk\" (UniqueName: \"kubernetes.io/projected/111c652f-251a-4e87-857d-f6c8df36396d-kube-api-access-m8vxk\") pod \"collect-profiles-29404440-vfl6s\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.504515 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:00 crc kubenswrapper[4792]: I1127 18:00:00.972967 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"]
Nov 27 18:00:01 crc kubenswrapper[4792]: I1127 18:00:01.635609 4792 generic.go:334] "Generic (PLEG): container finished" podID="111c652f-251a-4e87-857d-f6c8df36396d" containerID="9fc7603010df5dd4cf1a90359e034145e062f0f394d5d8e9bc6770cb6a604f18" exitCode=0
Nov 27 18:00:01 crc kubenswrapper[4792]: I1127 18:00:01.635678 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s" event={"ID":"111c652f-251a-4e87-857d-f6c8df36396d","Type":"ContainerDied","Data":"9fc7603010df5dd4cf1a90359e034145e062f0f394d5d8e9bc6770cb6a604f18"}
Nov 27 18:00:01 crc kubenswrapper[4792]: I1127 18:00:01.635927 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s" event={"ID":"111c652f-251a-4e87-857d-f6c8df36396d","Type":"ContainerStarted","Data":"72da76984bc19db96ddc5078e669017f03c776a467e840996a147d544d28c570"}
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.082247 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.213893 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/111c652f-251a-4e87-857d-f6c8df36396d-config-volume\") pod \"111c652f-251a-4e87-857d-f6c8df36396d\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") "
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.214370 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/111c652f-251a-4e87-857d-f6c8df36396d-secret-volume\") pod \"111c652f-251a-4e87-857d-f6c8df36396d\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") "
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.214533 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8vxk\" (UniqueName: \"kubernetes.io/projected/111c652f-251a-4e87-857d-f6c8df36396d-kube-api-access-m8vxk\") pod \"111c652f-251a-4e87-857d-f6c8df36396d\" (UID: \"111c652f-251a-4e87-857d-f6c8df36396d\") "
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.216098 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111c652f-251a-4e87-857d-f6c8df36396d-config-volume" (OuterVolumeSpecName: "config-volume") pod "111c652f-251a-4e87-857d-f6c8df36396d" (UID: "111c652f-251a-4e87-857d-f6c8df36396d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.228364 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/111c652f-251a-4e87-857d-f6c8df36396d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "111c652f-251a-4e87-857d-f6c8df36396d" (UID: "111c652f-251a-4e87-857d-f6c8df36396d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.228411 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111c652f-251a-4e87-857d-f6c8df36396d-kube-api-access-m8vxk" (OuterVolumeSpecName: "kube-api-access-m8vxk") pod "111c652f-251a-4e87-857d-f6c8df36396d" (UID: "111c652f-251a-4e87-857d-f6c8df36396d"). InnerVolumeSpecName "kube-api-access-m8vxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.317473 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8vxk\" (UniqueName: \"kubernetes.io/projected/111c652f-251a-4e87-857d-f6c8df36396d-kube-api-access-m8vxk\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.317507 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/111c652f-251a-4e87-857d-f6c8df36396d-config-volume\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.317518 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/111c652f-251a-4e87-857d-f6c8df36396d-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.668034 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s" event={"ID":"111c652f-251a-4e87-857d-f6c8df36396d","Type":"ContainerDied","Data":"72da76984bc19db96ddc5078e669017f03c776a467e840996a147d544d28c570"}
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.668082 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72da76984bc19db96ddc5078e669017f03c776a467e840996a147d544d28c570"
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.668448 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"
Nov 27 18:00:03 crc kubenswrapper[4792]: I1127 18:00:03.686748 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 18:00:03 crc kubenswrapper[4792]: E1127 18:00:03.687396 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:00:04 crc kubenswrapper[4792]: I1127 18:00:04.170717 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"]
Nov 27 18:00:04 crc kubenswrapper[4792]: I1127 18:00:04.190542 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404395-p9nk6"]
Nov 27 18:00:04 crc kubenswrapper[4792]: I1127 18:00:04.701405 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e9ee27-f9b2-47ec-bf45-3b97be28e298" path="/var/lib/kubelet/pods/f0e9ee27-f9b2-47ec-bf45-3b97be28e298/volumes"
Nov 27 18:00:05 crc kubenswrapper[4792]: I1127 18:00:05.388570 4792 scope.go:117] "RemoveContainer" containerID="ef5051df5fafe3a49c4660f1170b2f981887674de1c9095772d474c7aff08193"
Nov 27 18:00:14 crc kubenswrapper[4792]: I1127 18:00:14.688314 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef"
Nov 27 18:00:14 crc kubenswrapper[4792]: E1127 18:00:14.689193 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:00:14 crc kubenswrapper[4792]: I1127 18:00:14.811379 4792 generic.go:334] "Generic (PLEG): container finished" podID="78842a98-31e3-4f0b-8f35-6b8a1856a994" containerID="44d3b0744f3f1892c64ea78426c43d636afb31d5589ab3a43da9f70347da630c" exitCode=0
Nov 27 18:00:14 crc kubenswrapper[4792]: I1127 18:00:14.811445 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl" event={"ID":"78842a98-31e3-4f0b-8f35-6b8a1856a994","Type":"ContainerDied","Data":"44d3b0744f3f1892c64ea78426c43d636afb31d5589ab3a43da9f70347da630c"}
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.291406 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.434892 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-2\") pod \"78842a98-31e3-4f0b-8f35-6b8a1856a994\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") "
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.435819 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-1\") pod \"78842a98-31e3-4f0b-8f35-6b8a1856a994\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") "
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.435954 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ssh-key\") pod \"78842a98-31e3-4f0b-8f35-6b8a1856a994\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") "
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.436680 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-0\") pod \"78842a98-31e3-4f0b-8f35-6b8a1856a994\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") "
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.437001 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-telemetry-power-monitoring-combined-ca-bundle\") pod \"78842a98-31e3-4f0b-8f35-6b8a1856a994\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") "
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.437073 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-inventory\") pod \"78842a98-31e3-4f0b-8f35-6b8a1856a994\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") "
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.437528 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzlhd\" (UniqueName: \"kubernetes.io/projected/78842a98-31e3-4f0b-8f35-6b8a1856a994-kube-api-access-kzlhd\") pod \"78842a98-31e3-4f0b-8f35-6b8a1856a994\" (UID: \"78842a98-31e3-4f0b-8f35-6b8a1856a994\") "
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.448577 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78842a98-31e3-4f0b-8f35-6b8a1856a994-kube-api-access-kzlhd" (OuterVolumeSpecName: "kube-api-access-kzlhd") pod "78842a98-31e3-4f0b-8f35-6b8a1856a994" (UID: "78842a98-31e3-4f0b-8f35-6b8a1856a994"). InnerVolumeSpecName "kube-api-access-kzlhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.450703 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "78842a98-31e3-4f0b-8f35-6b8a1856a994" (UID: "78842a98-31e3-4f0b-8f35-6b8a1856a994"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.481825 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "78842a98-31e3-4f0b-8f35-6b8a1856a994" (UID: "78842a98-31e3-4f0b-8f35-6b8a1856a994"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.481862 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "78842a98-31e3-4f0b-8f35-6b8a1856a994" (UID: "78842a98-31e3-4f0b-8f35-6b8a1856a994"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.494876 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "78842a98-31e3-4f0b-8f35-6b8a1856a994" (UID: "78842a98-31e3-4f0b-8f35-6b8a1856a994"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.505908 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "78842a98-31e3-4f0b-8f35-6b8a1856a994" (UID: "78842a98-31e3-4f0b-8f35-6b8a1856a994"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.507878 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-inventory" (OuterVolumeSpecName: "inventory") pod "78842a98-31e3-4f0b-8f35-6b8a1856a994" (UID: "78842a98-31e3-4f0b-8f35-6b8a1856a994"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.542754 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzlhd\" (UniqueName: \"kubernetes.io/projected/78842a98-31e3-4f0b-8f35-6b8a1856a994-kube-api-access-kzlhd\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.542796 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.542811 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.542824 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.542834 4792 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.542844 4792 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.542856 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78842a98-31e3-4f0b-8f35-6b8a1856a994-inventory\") on node \"crc\" DevicePath \"\""
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.835710 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl" event={"ID":"78842a98-31e3-4f0b-8f35-6b8a1856a994","Type":"ContainerDied","Data":"29d710b030b617193ab4c02950da1ce73f34f443a7e301f83b085f67351867b7"}
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.835756 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29d710b030b617193ab4c02950da1ce73f34f443a7e301f83b085f67351867b7"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.835819 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.952623 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"]
Nov 27 18:00:16 crc kubenswrapper[4792]: E1127 18:00:16.953284 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78842a98-31e3-4f0b-8f35-6b8a1856a994" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.953312 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="78842a98-31e3-4f0b-8f35-6b8a1856a994" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 27 18:00:16 crc kubenswrapper[4792]: E1127 18:00:16.953330 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111c652f-251a-4e87-857d-f6c8df36396d" containerName="collect-profiles"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.953348 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="111c652f-251a-4e87-857d-f6c8df36396d" containerName="collect-profiles"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.953714 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="78842a98-31e3-4f0b-8f35-6b8a1856a994" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.953750 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="111c652f-251a-4e87-857d-f6c8df36396d" containerName="collect-profiles"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.954940 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.963201 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.963305 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.963491 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-wspkk"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.963914 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.964484 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 27 18:00:16 crc kubenswrapper[4792]: I1127 18:00:16.965029 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"]
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.053470 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7hh\" (UniqueName: \"kubernetes.io/projected/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-kube-api-access-bd7hh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.053559 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.053976 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.054053 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.054179 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.156274 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd7hh\" (UniqueName: \"kubernetes.io/projected/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-kube-api-access-bd7hh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.156341 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.156520 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.156554 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.156608 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.161358 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.161589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.162160 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.163263 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.194720 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd7hh\" (UniqueName: \"kubernetes.io/projected/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-kube-api-access-bd7hh\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pjp2g\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"
Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.284873 4792 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g" Nov 27 18:00:17 crc kubenswrapper[4792]: I1127 18:00:17.955214 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g"] Nov 27 18:00:18 crc kubenswrapper[4792]: I1127 18:00:18.864288 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g" event={"ID":"caa13c46-5c39-46bb-a2bb-cfa46caae2b4","Type":"ContainerStarted","Data":"8bf506388a30cc72c573c93ded52472de29442fa42373b2d40b502ad944c71f2"} Nov 27 18:00:18 crc kubenswrapper[4792]: I1127 18:00:18.864764 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g" event={"ID":"caa13c46-5c39-46bb-a2bb-cfa46caae2b4","Type":"ContainerStarted","Data":"10e685866a87424c32ac4c7f694e58eddaa9a70edf6a410dcdfa0f9de1125ad9"} Nov 27 18:00:18 crc kubenswrapper[4792]: I1127 18:00:18.889768 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g" podStartSLOduration=2.469449884 podStartE2EDuration="2.889729335s" podCreationTimestamp="2025-11-27 18:00:16 +0000 UTC" firstStartedPulling="2025-11-27 18:00:17.965781839 +0000 UTC m=+3040.308608157" lastFinishedPulling="2025-11-27 18:00:18.38606129 +0000 UTC m=+3040.728887608" observedRunningTime="2025-11-27 18:00:18.881336216 +0000 UTC m=+3041.224162534" watchObservedRunningTime="2025-11-27 18:00:18.889729335 +0000 UTC m=+3041.232555653" Nov 27 18:00:26 crc kubenswrapper[4792]: I1127 18:00:26.686808 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:00:26 crc kubenswrapper[4792]: E1127 18:00:26.687499 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:00:35 crc kubenswrapper[4792]: I1127 18:00:35.037045 4792 generic.go:334] "Generic (PLEG): container finished" podID="caa13c46-5c39-46bb-a2bb-cfa46caae2b4" containerID="8bf506388a30cc72c573c93ded52472de29442fa42373b2d40b502ad944c71f2" exitCode=0 Nov 27 18:00:35 crc kubenswrapper[4792]: I1127 18:00:35.037541 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g" event={"ID":"caa13c46-5c39-46bb-a2bb-cfa46caae2b4","Type":"ContainerDied","Data":"8bf506388a30cc72c573c93ded52472de29442fa42373b2d40b502ad944c71f2"} Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.575879 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.692861 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd7hh\" (UniqueName: \"kubernetes.io/projected/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-kube-api-access-bd7hh\") pod \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.706509 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-inventory\") pod \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.706612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-ssh-key\") pod \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.706739 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-1\") pod \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.706839 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-0\") pod \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\" (UID: \"caa13c46-5c39-46bb-a2bb-cfa46caae2b4\") " Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.767949 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-kube-api-access-bd7hh" (OuterVolumeSpecName: "kube-api-access-bd7hh") pod "caa13c46-5c39-46bb-a2bb-cfa46caae2b4" (UID: "caa13c46-5c39-46bb-a2bb-cfa46caae2b4"). InnerVolumeSpecName "kube-api-access-bd7hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.810370 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd7hh\" (UniqueName: \"kubernetes.io/projected/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-kube-api-access-bd7hh\") on node \"crc\" DevicePath \"\"" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.841742 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "caa13c46-5c39-46bb-a2bb-cfa46caae2b4" (UID: "caa13c46-5c39-46bb-a2bb-cfa46caae2b4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.862606 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-inventory" (OuterVolumeSpecName: "inventory") pod "caa13c46-5c39-46bb-a2bb-cfa46caae2b4" (UID: "caa13c46-5c39-46bb-a2bb-cfa46caae2b4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.867867 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "caa13c46-5c39-46bb-a2bb-cfa46caae2b4" (UID: "caa13c46-5c39-46bb-a2bb-cfa46caae2b4"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.898983 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "caa13c46-5c39-46bb-a2bb-cfa46caae2b4" (UID: "caa13c46-5c39-46bb-a2bb-cfa46caae2b4"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.912704 4792 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.912741 4792 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-inventory\") on node \"crc\" DevicePath \"\"" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.912753 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 18:00:36 crc kubenswrapper[4792]: I1127 18:00:36.912761 4792 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/caa13c46-5c39-46bb-a2bb-cfa46caae2b4-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 27 18:00:37 crc kubenswrapper[4792]: I1127 18:00:37.060217 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g" event={"ID":"caa13c46-5c39-46bb-a2bb-cfa46caae2b4","Type":"ContainerDied","Data":"10e685866a87424c32ac4c7f694e58eddaa9a70edf6a410dcdfa0f9de1125ad9"} Nov 27 18:00:37 crc kubenswrapper[4792]: I1127 18:00:37.060281 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e685866a87424c32ac4c7f694e58eddaa9a70edf6a410dcdfa0f9de1125ad9" Nov 27 18:00:37 crc kubenswrapper[4792]: I1127 18:00:37.060290 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pjp2g" Nov 27 18:00:37 crc kubenswrapper[4792]: I1127 18:00:37.687551 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:00:37 crc kubenswrapper[4792]: E1127 18:00:37.688147 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:00:42 crc kubenswrapper[4792]: E1127 18:00:42.497354 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:39618->38.102.83.214:33271: write tcp 38.102.83.214:39618->38.102.83.214:33271: write: broken pipe Nov 27 18:00:50 crc kubenswrapper[4792]: I1127 18:00:50.687560 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:00:50 crc kubenswrapper[4792]: E1127 18:00:50.688323 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.165997 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29404441-l4szq"] Nov 27 18:01:00 crc kubenswrapper[4792]: E1127 18:01:00.167344 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caa13c46-5c39-46bb-a2bb-cfa46caae2b4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.167367 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="caa13c46-5c39-46bb-a2bb-cfa46caae2b4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.167799 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa13c46-5c39-46bb-a2bb-cfa46caae2b4" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.169196 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.231728 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29404441-l4szq"] Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.276738 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-combined-ca-bundle\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.276835 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kkwp\" (UniqueName: \"kubernetes.io/projected/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-kube-api-access-8kkwp\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.276937 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-fernet-keys\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.276964 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-config-data\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.378973 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-combined-ca-bundle\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.379088 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kkwp\" (UniqueName: \"kubernetes.io/projected/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-kube-api-access-8kkwp\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.379194 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-fernet-keys\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.379234 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-config-data\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.385715 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-config-data\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.388483 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-combined-ca-bundle\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.393593 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-fernet-keys\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.395692 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kkwp\" (UniqueName: \"kubernetes.io/projected/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-kube-api-access-8kkwp\") pod \"keystone-cron-29404441-l4szq\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:00 crc kubenswrapper[4792]: I1127 18:01:00.493442 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:01 crc kubenswrapper[4792]: I1127 18:01:01.044728 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29404441-l4szq"] Nov 27 18:01:01 crc kubenswrapper[4792]: I1127 18:01:01.314595 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29404441-l4szq" event={"ID":"eea31a88-b7d7-4537-bd17-1a9edcaee2d9","Type":"ContainerStarted","Data":"85109f1e83f7812a1951ed9827f9e7ccf670a21e08f29f8c746c4bb9331e9627"} Nov 27 18:01:01 crc kubenswrapper[4792]: I1127 18:01:01.315068 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29404441-l4szq" event={"ID":"eea31a88-b7d7-4537-bd17-1a9edcaee2d9","Type":"ContainerStarted","Data":"725d2a16934ddeb129a9b2c88cbc4dd8f3df8748542d08ef8eccf90ac5ba23a4"} Nov 27 18:01:01 crc kubenswrapper[4792]: I1127 18:01:01.333810 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29404441-l4szq" podStartSLOduration=1.3337879369999999 podStartE2EDuration="1.333787937s" podCreationTimestamp="2025-11-27 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 18:01:01.32911427 +0000 UTC m=+3083.671940588" watchObservedRunningTime="2025-11-27 18:01:01.333787937 +0000 UTC m=+3083.676614255" Nov 27 18:01:05 crc kubenswrapper[4792]: I1127 18:01:05.687229 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:01:05 crc kubenswrapper[4792]: E1127 18:01:05.687940 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:01:07 crc kubenswrapper[4792]: I1127 18:01:07.378251 4792 generic.go:334] "Generic (PLEG): container finished" podID="eea31a88-b7d7-4537-bd17-1a9edcaee2d9" containerID="85109f1e83f7812a1951ed9827f9e7ccf670a21e08f29f8c746c4bb9331e9627" exitCode=0 Nov 27 18:01:07 crc kubenswrapper[4792]: I1127 18:01:07.378341 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29404441-l4szq" event={"ID":"eea31a88-b7d7-4537-bd17-1a9edcaee2d9","Type":"ContainerDied","Data":"85109f1e83f7812a1951ed9827f9e7ccf670a21e08f29f8c746c4bb9331e9627"} Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.785002 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.830975 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-fernet-keys\") pod \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.831033 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-config-data\") pod \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.831924 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kkwp\" (UniqueName: \"kubernetes.io/projected/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-kube-api-access-8kkwp\") pod \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.832032 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-combined-ca-bundle\") pod \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\" (UID: \"eea31a88-b7d7-4537-bd17-1a9edcaee2d9\") " Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.837023 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-kube-api-access-8kkwp" (OuterVolumeSpecName: "kube-api-access-8kkwp") pod "eea31a88-b7d7-4537-bd17-1a9edcaee2d9" (UID: "eea31a88-b7d7-4537-bd17-1a9edcaee2d9"). InnerVolumeSpecName "kube-api-access-8kkwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.839941 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "eea31a88-b7d7-4537-bd17-1a9edcaee2d9" (UID: "eea31a88-b7d7-4537-bd17-1a9edcaee2d9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.871005 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eea31a88-b7d7-4537-bd17-1a9edcaee2d9" (UID: "eea31a88-b7d7-4537-bd17-1a9edcaee2d9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.891897 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-config-data" (OuterVolumeSpecName: "config-data") pod "eea31a88-b7d7-4537-bd17-1a9edcaee2d9" (UID: "eea31a88-b7d7-4537-bd17-1a9edcaee2d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.935463 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.935491 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.935502 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kkwp\" (UniqueName: \"kubernetes.io/projected/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-kube-api-access-8kkwp\") on node \"crc\" DevicePath \"\"" Nov 27 18:01:08 crc kubenswrapper[4792]: I1127 18:01:08.935510 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea31a88-b7d7-4537-bd17-1a9edcaee2d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 18:01:09 crc kubenswrapper[4792]: I1127 18:01:09.401668 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29404441-l4szq" event={"ID":"eea31a88-b7d7-4537-bd17-1a9edcaee2d9","Type":"ContainerDied","Data":"725d2a16934ddeb129a9b2c88cbc4dd8f3df8748542d08ef8eccf90ac5ba23a4"} Nov 27 18:01:09 crc kubenswrapper[4792]: I1127 18:01:09.401702 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="725d2a16934ddeb129a9b2c88cbc4dd8f3df8748542d08ef8eccf90ac5ba23a4" Nov 27 18:01:09 crc kubenswrapper[4792]: I1127 18:01:09.401732 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29404441-l4szq" Nov 27 18:01:17 crc kubenswrapper[4792]: E1127 18:01:17.362104 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:46200->38.102.83.214:33271: write tcp 38.102.83.214:46200->38.102.83.214:33271: write: broken pipe Nov 27 18:01:17 crc kubenswrapper[4792]: I1127 18:01:17.691276 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:01:17 crc kubenswrapper[4792]: E1127 18:01:17.691799 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:01:30 crc kubenswrapper[4792]: I1127 18:01:30.688412 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:01:30 crc kubenswrapper[4792]: E1127 18:01:30.689587 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:01:42 crc kubenswrapper[4792]: I1127 18:01:42.687576 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:01:42 crc kubenswrapper[4792]: E1127 18:01:42.688489 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:01:55 crc kubenswrapper[4792]: I1127 18:01:55.686865 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:01:55 crc kubenswrapper[4792]: E1127 18:01:55.687888 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:02:00 crc kubenswrapper[4792]: E1127 18:02:00.334546 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:38818->38.102.83.214:33271: write tcp 38.102.83.214:38818->38.102.83.214:33271: write: broken pipe Nov 27 18:02:07 crc kubenswrapper[4792]: I1127 18:02:07.687255 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:02:07 crc kubenswrapper[4792]: E1127 18:02:07.688083 4792 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:02:20 crc kubenswrapper[4792]: I1127 18:02:20.687344 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:02:20 crc kubenswrapper[4792]: E1127 18:02:20.688184 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:02:32 crc kubenswrapper[4792]: I1127 18:02:32.687282 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:02:32 crc kubenswrapper[4792]: E1127 18:02:32.688406 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:02:43 crc kubenswrapper[4792]: I1127 18:02:43.687232 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:02:43 crc kubenswrapper[4792]: E1127 18:02:43.688473 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:02:57 crc kubenswrapper[4792]: I1127 18:02:57.687185 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:02:57 crc kubenswrapper[4792]: E1127 18:02:57.688155 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:03:12 crc kubenswrapper[4792]: I1127 18:03:12.688115 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:03:14 crc kubenswrapper[4792]: I1127 18:03:13.999156 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" 
event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"1cc2c071d8c4b3c3b316fd6fc71964bdc58502394551a6a0851392440fb5aff4"} Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.676621 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6dsmt"] Nov 27 18:03:29 crc kubenswrapper[4792]: E1127 18:03:29.677809 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea31a88-b7d7-4537-bd17-1a9edcaee2d9" containerName="keystone-cron" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.677826 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea31a88-b7d7-4537-bd17-1a9edcaee2d9" containerName="keystone-cron" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.678130 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea31a88-b7d7-4537-bd17-1a9edcaee2d9" containerName="keystone-cron" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.679963 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.689591 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dsmt"] Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.742614 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pd7w\" (UniqueName: \"kubernetes.io/projected/1a59356f-aaa7-42c5-aab4-032636f1c269-kube-api-access-9pd7w\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.742691 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-utilities\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.742774 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-catalog-content\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.844595 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-catalog-content\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.844844 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pd7w\" (UniqueName: \"kubernetes.io/projected/1a59356f-aaa7-42c5-aab4-032636f1c269-kube-api-access-9pd7w\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.844889 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-utilities\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.845220 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-catalog-content\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.845377 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-utilities\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:29 crc kubenswrapper[4792]: I1127 18:03:29.879946 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pd7w\" (UniqueName: \"kubernetes.io/projected/1a59356f-aaa7-42c5-aab4-032636f1c269-kube-api-access-9pd7w\") pod \"community-operators-6dsmt\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:30 crc kubenswrapper[4792]: I1127 18:03:30.000720 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:30 crc kubenswrapper[4792]: I1127 18:03:30.642959 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6dsmt"] Nov 27 18:03:31 crc kubenswrapper[4792]: I1127 18:03:31.225928 4792 generic.go:334] "Generic (PLEG): container finished" podID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerID="f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0" exitCode=0 Nov 27 18:03:31 crc kubenswrapper[4792]: I1127 18:03:31.225964 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dsmt" event={"ID":"1a59356f-aaa7-42c5-aab4-032636f1c269","Type":"ContainerDied","Data":"f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0"} Nov 27 18:03:31 crc kubenswrapper[4792]: I1127 18:03:31.226222 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dsmt" event={"ID":"1a59356f-aaa7-42c5-aab4-032636f1c269","Type":"ContainerStarted","Data":"9b13d02ea489e2129fafaef08e433339bb0317f88ed4e99015aeb006488003a2"} Nov 27 18:03:31 crc kubenswrapper[4792]: I1127 18:03:31.229430 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 18:03:33 crc kubenswrapper[4792]: I1127 18:03:33.254733 4792 generic.go:334] "Generic (PLEG): container finished" podID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerID="eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d" exitCode=0 Nov 27 18:03:33 crc kubenswrapper[4792]: I1127 18:03:33.254803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dsmt" event={"ID":"1a59356f-aaa7-42c5-aab4-032636f1c269","Type":"ContainerDied","Data":"eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d"} Nov 27 18:03:34 crc kubenswrapper[4792]: I1127 18:03:34.270715 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6dsmt" event={"ID":"1a59356f-aaa7-42c5-aab4-032636f1c269","Type":"ContainerStarted","Data":"24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b"} Nov 27 18:03:34 crc kubenswrapper[4792]: I1127 18:03:34.296147 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6dsmt" podStartSLOduration=2.725149807 podStartE2EDuration="5.296132469s" podCreationTimestamp="2025-11-27 18:03:29 +0000 UTC" firstStartedPulling="2025-11-27 18:03:31.229183854 +0000 UTC m=+3233.572010172" lastFinishedPulling="2025-11-27 18:03:33.800166526 +0000 UTC m=+3236.142992834" observedRunningTime="2025-11-27 18:03:34.29213276 +0000 UTC m=+3236.634959088" watchObservedRunningTime="2025-11-27 18:03:34.296132469 +0000 UTC m=+3236.638958777" Nov 27 18:03:40 crc kubenswrapper[4792]: I1127 18:03:40.000965 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:40 crc kubenswrapper[4792]: I1127 18:03:40.001671 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:40 crc kubenswrapper[4792]: I1127 18:03:40.085339 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:40 crc kubenswrapper[4792]: I1127 18:03:40.395939 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:40 crc kubenswrapper[4792]: I1127 18:03:40.455737 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dsmt"] Nov 27 18:03:42 crc kubenswrapper[4792]: I1127 18:03:42.355013 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6dsmt" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerName="registry-server" containerID="cri-o://24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b" gracePeriod=2 Nov 27 18:03:42 crc kubenswrapper[4792]: I1127 18:03:42.914696 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:42 crc kubenswrapper[4792]: I1127 18:03:42.989672 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-catalog-content\") pod \"1a59356f-aaa7-42c5-aab4-032636f1c269\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " Nov 27 18:03:42 crc kubenswrapper[4792]: I1127 18:03:42.989875 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pd7w\" (UniqueName: \"kubernetes.io/projected/1a59356f-aaa7-42c5-aab4-032636f1c269-kube-api-access-9pd7w\") pod \"1a59356f-aaa7-42c5-aab4-032636f1c269\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " Nov 27 18:03:42 crc kubenswrapper[4792]: I1127 18:03:42.989992 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-utilities\") pod \"1a59356f-aaa7-42c5-aab4-032636f1c269\" (UID: \"1a59356f-aaa7-42c5-aab4-032636f1c269\") " Nov 27 18:03:42 crc kubenswrapper[4792]: I1127 18:03:42.990806 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-utilities" (OuterVolumeSpecName: "utilities") pod "1a59356f-aaa7-42c5-aab4-032636f1c269" (UID: "1a59356f-aaa7-42c5-aab4-032636f1c269"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:03:42 crc kubenswrapper[4792]: I1127 18:03:42.998990 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a59356f-aaa7-42c5-aab4-032636f1c269-kube-api-access-9pd7w" (OuterVolumeSpecName: "kube-api-access-9pd7w") pod "1a59356f-aaa7-42c5-aab4-032636f1c269" (UID: "1a59356f-aaa7-42c5-aab4-032636f1c269"). InnerVolumeSpecName "kube-api-access-9pd7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.042332 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a59356f-aaa7-42c5-aab4-032636f1c269" (UID: "1a59356f-aaa7-42c5-aab4-032636f1c269"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.092718 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pd7w\" (UniqueName: \"kubernetes.io/projected/1a59356f-aaa7-42c5-aab4-032636f1c269-kube-api-access-9pd7w\") on node \"crc\" DevicePath \"\"" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.092760 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.092770 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a59356f-aaa7-42c5-aab4-032636f1c269-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.366800 4792 generic.go:334] "Generic (PLEG): container finished" podID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerID="24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b" exitCode=0 Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.366858 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dsmt" event={"ID":"1a59356f-aaa7-42c5-aab4-032636f1c269","Type":"ContainerDied","Data":"24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b"} Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.366897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6dsmt" event={"ID":"1a59356f-aaa7-42c5-aab4-032636f1c269","Type":"ContainerDied","Data":"9b13d02ea489e2129fafaef08e433339bb0317f88ed4e99015aeb006488003a2"} Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.366892 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6dsmt" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.366921 4792 scope.go:117] "RemoveContainer" containerID="24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.400899 4792 scope.go:117] "RemoveContainer" containerID="eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.403987 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6dsmt"] Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.415545 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6dsmt"] Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.425872 4792 scope.go:117] "RemoveContainer" containerID="f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.479071 4792 scope.go:117] "RemoveContainer" containerID="24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b" Nov 27 18:03:43 crc kubenswrapper[4792]: E1127 18:03:43.479531 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b\": container with ID starting with 24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b not found: ID does not exist" containerID="24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.479564 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b"} err="failed to get container status \"24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b\": rpc error: code = NotFound desc = could not find container \"24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b\": container with ID starting with 24be48a38a1b092e4f94fdc16d8f363c8ae5cb800bb884ac1c923914e6dd209b not found: ID does not exist" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.479583 4792 scope.go:117] "RemoveContainer" containerID="eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d" Nov 27 18:03:43 crc kubenswrapper[4792]: E1127 18:03:43.479800 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d\": container with ID starting with eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d not found: ID does not exist" containerID="eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.479838 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d"} err="failed to get container status \"eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d\": rpc error: code = NotFound desc = could not find container \"eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d\": container with ID starting with eccfa570f640d833886d91a22beb189d633bf65170ecc71f4232ee465b35779d not found: ID does not exist" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.479857 4792 scope.go:117] "RemoveContainer" 
containerID="f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0" Nov 27 18:03:43 crc kubenswrapper[4792]: E1127 18:03:43.480124 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0\": container with ID starting with f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0 not found: ID does not exist" containerID="f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0" Nov 27 18:03:43 crc kubenswrapper[4792]: I1127 18:03:43.480159 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0"} err="failed to get container status \"f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0\": rpc error: code = NotFound desc = could not find container \"f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0\": container with ID starting with f21e5bb35709cb0c148b2b29c9972585e8ebe76c1c7c69384f03ad774b5204e0 not found: ID does not exist" Nov 27 18:03:44 crc kubenswrapper[4792]: I1127 18:03:44.711189 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" path="/var/lib/kubelet/pods/1a59356f-aaa7-42c5-aab4-032636f1c269/volumes" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.671425 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-95s2h"] Nov 27 18:05:15 crc kubenswrapper[4792]: E1127 18:05:15.672976 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerName="extract-content" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.672993 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerName="extract-content" Nov 27 18:05:15 crc kubenswrapper[4792]: E1127 18:05:15.673008 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerName="registry-server" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.673014 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerName="registry-server" Nov 27 18:05:15 crc kubenswrapper[4792]: E1127 18:05:15.673032 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerName="extract-utilities" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.673039 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerName="extract-utilities" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.673277 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a59356f-aaa7-42c5-aab4-032636f1c269" containerName="registry-server" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.675251 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.707419 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95s2h"] Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.805078 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mv8\" (UniqueName: \"kubernetes.io/projected/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-kube-api-access-w5mv8\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.805301 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-utilities\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.805836 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-catalog-content\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.908039 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-catalog-content\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.908240 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mv8\" (UniqueName: \"kubernetes.io/projected/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-kube-api-access-w5mv8\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.908295 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-utilities\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.908595 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-catalog-content\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.908867 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-utilities\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:15 crc kubenswrapper[4792]: I1127 18:05:15.936880 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w5mv8\" (UniqueName: \"kubernetes.io/projected/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-kube-api-access-w5mv8\") pod \"certified-operators-95s2h\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:16 crc kubenswrapper[4792]: I1127 18:05:16.000917 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:16 crc kubenswrapper[4792]: I1127 18:05:16.560110 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-95s2h"] Nov 27 18:05:17 crc kubenswrapper[4792]: I1127 18:05:17.483127 4792 generic.go:334] "Generic (PLEG): container finished" podID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerID="90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1" exitCode=0 Nov 27 18:05:17 crc kubenswrapper[4792]: I1127 18:05:17.483189 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95s2h" event={"ID":"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8","Type":"ContainerDied","Data":"90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1"} Nov 27 18:05:17 crc kubenswrapper[4792]: I1127 18:05:17.483475 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95s2h" event={"ID":"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8","Type":"ContainerStarted","Data":"17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b"} Nov 27 18:05:18 crc kubenswrapper[4792]: I1127 18:05:18.495253 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95s2h" event={"ID":"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8","Type":"ContainerStarted","Data":"c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1"} Nov 27 18:05:20 crc kubenswrapper[4792]: I1127 18:05:20.523507 4792 generic.go:334] "Generic (PLEG): container finished" podID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerID="c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1" exitCode=0 Nov 27 18:05:20 crc kubenswrapper[4792]: I1127 18:05:20.523901 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95s2h" event={"ID":"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8","Type":"ContainerDied","Data":"c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1"} Nov 27 18:05:21 crc kubenswrapper[4792]: I1127 18:05:21.537410 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95s2h" event={"ID":"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8","Type":"ContainerStarted","Data":"498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28"} Nov 27 18:05:21 crc kubenswrapper[4792]: I1127 18:05:21.570132 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-95s2h" podStartSLOduration=3.003172711 podStartE2EDuration="6.570111388s" podCreationTimestamp="2025-11-27 18:05:15 +0000 UTC" firstStartedPulling="2025-11-27 18:05:17.485657682 +0000 UTC m=+3339.828484000" lastFinishedPulling="2025-11-27 18:05:21.052596359 +0000 UTC m=+3343.395422677" observedRunningTime="2025-11-27 18:05:21.564062517 +0000 UTC m=+3343.906888855" watchObservedRunningTime="2025-11-27 18:05:21.570111388 +0000 UTC m=+3343.912937706" Nov 27 18:05:26 crc kubenswrapper[4792]: I1127 18:05:26.001620 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:26 crc kubenswrapper[4792]: I1127 18:05:26.002385 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:27 crc kubenswrapper[4792]: I1127 18:05:27.065915 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-95s2h" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="registry-server" probeResult="failure" output=< Nov 27 18:05:27 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:05:27 crc kubenswrapper[4792]: > Nov 27 18:05:36 crc kubenswrapper[4792]: I1127 18:05:36.054998 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:36 crc kubenswrapper[4792]: I1127 18:05:36.114391 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:36 crc kubenswrapper[4792]: I1127 18:05:36.293491 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-95s2h"] Nov 27 18:05:37 crc kubenswrapper[4792]: I1127 18:05:37.696232 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-95s2h" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="registry-server" containerID="cri-o://498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28" gracePeriod=2 Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.219522 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.289905 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.289956 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.354364 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mv8\" (UniqueName: \"kubernetes.io/projected/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-kube-api-access-w5mv8\") pod \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.354578 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-catalog-content\") pod \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.354738 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-utilities\") pod 
\"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\" (UID: \"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8\") " Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.357772 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-utilities" (OuterVolumeSpecName: "utilities") pod "718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" (UID: "718ce1a4-8eac-48f5-95f3-0ff84ea4bee8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.373872 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-kube-api-access-w5mv8" (OuterVolumeSpecName: "kube-api-access-w5mv8") pod "718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" (UID: "718ce1a4-8eac-48f5-95f3-0ff84ea4bee8"). InnerVolumeSpecName "kube-api-access-w5mv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.419102 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" (UID: "718ce1a4-8eac-48f5-95f3-0ff84ea4bee8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.457502 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.457546 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.457556 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5mv8\" (UniqueName: \"kubernetes.io/projected/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8-kube-api-access-w5mv8\") on node \"crc\" DevicePath \"\"" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.707918 4792 generic.go:334] "Generic (PLEG): container finished" podID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerID="498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28" exitCode=0 Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.707970 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95s2h" event={"ID":"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8","Type":"ContainerDied","Data":"498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28"} Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.708008 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-95s2h" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.708031 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-95s2h" event={"ID":"718ce1a4-8eac-48f5-95f3-0ff84ea4bee8","Type":"ContainerDied","Data":"17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b"} Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.708050 4792 scope.go:117] "RemoveContainer" containerID="498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.734014 4792 scope.go:117] "RemoveContainer" containerID="c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.758077 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-95s2h"] Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.771544 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-95s2h"] Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.784261 4792 scope.go:117] "RemoveContainer" containerID="90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.811925 4792 scope.go:117] "RemoveContainer" containerID="498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28" Nov 27 18:05:38 crc kubenswrapper[4792]: E1127 18:05:38.812334 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28\": container with ID starting with 498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28 not found: ID does not exist" containerID="498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.812365 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28"} err="failed to get container status \"498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28\": rpc error: code = NotFound desc = could not find container \"498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28\": container with ID starting with 498b1281301803f7384c3dfe030ac08641361da79a1313e8c6316a08a6f5fa28 not found: ID does not exist" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.812386 4792 scope.go:117] "RemoveContainer" containerID="c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1" Nov 27 18:05:38 crc kubenswrapper[4792]: E1127 18:05:38.812713 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1\": container with ID starting with c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1 not found: ID does not exist" containerID="c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.812734 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1"} err="failed to get container status \"c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1\": rpc error: code = NotFound desc = could not find 
container \"c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1\": container with ID starting with c2b46344b1fb39f3c068b406f5db3146d7950169e17cf41ad43d447abfe4a9e1 not found: ID does not exist" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.812746 4792 scope.go:117] "RemoveContainer" containerID="90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1" Nov 27 18:05:38 crc kubenswrapper[4792]: E1127 18:05:38.813226 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1\": container with ID starting with 90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1 not found: ID does not exist" containerID="90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1" Nov 27 18:05:38 crc kubenswrapper[4792]: I1127 18:05:38.813259 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1"} err="failed to get container status \"90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1\": rpc error: code = NotFound desc = could not find container \"90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1\": container with ID starting with 90e43adfd12f722d82532fe10b102d796e2259b95625d565cc371e2ba51d88d1 not found: ID does not exist" Nov 27 18:05:40 crc kubenswrapper[4792]: I1127 18:05:40.703977 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" path="/var/lib/kubelet/pods/718ce1a4-8eac-48f5-95f3-0ff84ea4bee8/volumes" Nov 27 18:05:47 crc kubenswrapper[4792]: E1127 18:05:47.271827 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache]" Nov 27 18:05:48 crc kubenswrapper[4792]: E1127 18:05:48.104948 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache]" Nov 27 18:05:48 crc kubenswrapper[4792]: E1127 18:05:48.105715 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache]" Nov 27 18:05:52 crc kubenswrapper[4792]: E1127 18:05:52.533060 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache]" Nov 27 18:05:57 crc kubenswrapper[4792]: E1127 18:05:57.389758 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache]" Nov 27 18:06:07 crc kubenswrapper[4792]: E1127 18:06:07.274352 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache]" Nov 27 18:06:07 crc kubenswrapper[4792]: E1127 18:06:07.448188 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache]" Nov 27 18:06:08 crc kubenswrapper[4792]: I1127 18:06:08.290983 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:06:08 crc kubenswrapper[4792]: I1127 18:06:08.291571 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:06:17 crc kubenswrapper[4792]: E1127 18:06:17.758340 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache]" Nov 27 18:06:22 crc kubenswrapper[4792]: E1127 18:06:22.275769 4792 cadvisor_stats_provider.go:516] "Partial 
Nov 27 18:06:28 crc kubenswrapper[4792]: E1127 18:06:28.109756 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache]"
Nov 27 18:06:37 crc kubenswrapper[4792]: E1127 18:06:37.526415 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache]"
Nov 27 18:06:38 crc kubenswrapper[4792]: E1127 18:06:38.162667 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice/crio-17532d0376659de363374ecb56bfd40fb038093c9a37fb323e9ed5252ccac35b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod718ce1a4_8eac_48f5_95f3_0ff84ea4bee8.slice\": RecentStats: unable to find data in memory cache]"
Nov 27 18:06:38 crc kubenswrapper[4792]: I1127 18:06:38.290702 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:06:38 crc kubenswrapper[4792]: I1127 18:06:38.290757 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:06:38 crc kubenswrapper[4792]: I1127 18:06:38.290797 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx"
Nov 27 18:06:38 crc kubenswrapper[4792]: I1127 18:06:38.291368 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1cc2c071d8c4b3c3b316fd6fc71964bdc58502394551a6a0851392440fb5aff4"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
restarted" Nov 27 18:06:38 crc kubenswrapper[4792]: I1127 18:06:38.291423 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://1cc2c071d8c4b3c3b316fd6fc71964bdc58502394551a6a0851392440fb5aff4" gracePeriod=600 Nov 27 18:06:39 crc kubenswrapper[4792]: I1127 18:06:39.417075 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="1cc2c071d8c4b3c3b316fd6fc71964bdc58502394551a6a0851392440fb5aff4" exitCode=0 Nov 27 18:06:39 crc kubenswrapper[4792]: I1127 18:06:39.417519 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"1cc2c071d8c4b3c3b316fd6fc71964bdc58502394551a6a0851392440fb5aff4"} Nov 27 18:06:39 crc kubenswrapper[4792]: I1127 18:06:39.417954 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af"} Nov 27 18:06:39 crc kubenswrapper[4792]: I1127 18:06:39.417987 4792 scope.go:117] "RemoveContainer" containerID="f5af2eec8af19ab1368933f002cf030774b69fcff7637ee91801c255a1edf0ef" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.159371 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vg6c9"] Nov 27 18:08:20 crc kubenswrapper[4792]: E1127 18:08:20.160436 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="extract-utilities" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.160453 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="extract-utilities" Nov 27 18:08:20 crc kubenswrapper[4792]: E1127 18:08:20.160477 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="registry-server" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.160485 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="registry-server" Nov 27 18:08:20 crc kubenswrapper[4792]: E1127 18:08:20.160530 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="extract-content" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.160543 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="extract-content" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.160798 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="718ce1a4-8eac-48f5-95f3-0ff84ea4bee8" containerName="registry-server" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.162524 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.182946 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vg6c9"] Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.291243 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54m7q\" (UniqueName: \"kubernetes.io/projected/16d5cbcc-a900-4c6f-8f6d-75bb92558711-kube-api-access-54m7q\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.291296 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-utilities\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.291412 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-catalog-content\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.392960 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-catalog-content\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.393119 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54m7q\" (UniqueName: \"kubernetes.io/projected/16d5cbcc-a900-4c6f-8f6d-75bb92558711-kube-api-access-54m7q\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.393146 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-utilities\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.393693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-utilities\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.393721 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-catalog-content\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.430444 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-54m7q\" (UniqueName: \"kubernetes.io/projected/16d5cbcc-a900-4c6f-8f6d-75bb92558711-kube-api-access-54m7q\") pod \"redhat-operators-vg6c9\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:20 crc kubenswrapper[4792]: I1127 18:08:20.491204 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:21 crc kubenswrapper[4792]: I1127 18:08:21.086315 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vg6c9"] Nov 27 18:08:21 crc kubenswrapper[4792]: I1127 18:08:21.980048 4792 generic.go:334] "Generic (PLEG): container finished" podID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerID="fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e" exitCode=0 Nov 27 18:08:21 crc kubenswrapper[4792]: I1127 18:08:21.980113 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg6c9" event={"ID":"16d5cbcc-a900-4c6f-8f6d-75bb92558711","Type":"ContainerDied","Data":"fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e"} Nov 27 18:08:21 crc kubenswrapper[4792]: I1127 18:08:21.980381 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg6c9" event={"ID":"16d5cbcc-a900-4c6f-8f6d-75bb92558711","Type":"ContainerStarted","Data":"5c06a75648c1a564a8483105f32f03c47182f596b24c7aa083006763300afc03"} Nov 27 18:08:24 crc kubenswrapper[4792]: I1127 18:08:24.003808 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg6c9" event={"ID":"16d5cbcc-a900-4c6f-8f6d-75bb92558711","Type":"ContainerStarted","Data":"87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56"} Nov 27 18:08:28 crc kubenswrapper[4792]: I1127 18:08:28.064246 4792 generic.go:334] "Generic (PLEG): container finished" podID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerID="87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56" exitCode=0 Nov 27 18:08:28 crc kubenswrapper[4792]: I1127 18:08:28.064873 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg6c9" event={"ID":"16d5cbcc-a900-4c6f-8f6d-75bb92558711","Type":"ContainerDied","Data":"87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56"} Nov 27 18:08:29 crc kubenswrapper[4792]: I1127 18:08:29.076122 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg6c9" event={"ID":"16d5cbcc-a900-4c6f-8f6d-75bb92558711","Type":"ContainerStarted","Data":"22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3"} Nov 27 18:08:29 crc kubenswrapper[4792]: I1127 18:08:29.096254 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vg6c9" podStartSLOduration=2.48383107 podStartE2EDuration="9.096215555s" podCreationTimestamp="2025-11-27 18:08:20 +0000 UTC" firstStartedPulling="2025-11-27 18:08:21.982438207 +0000 UTC m=+3524.325264515" lastFinishedPulling="2025-11-27 18:08:28.594822672 +0000 UTC m=+3530.937649000" observedRunningTime="2025-11-27 18:08:29.09201959 +0000 UTC m=+3531.434845918" watchObservedRunningTime="2025-11-27 18:08:29.096215555 +0000 UTC m=+3531.439041873" Nov 27 18:08:30 crc kubenswrapper[4792]: I1127 18:08:30.491929 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 
Nov 27 18:08:31 crc kubenswrapper[4792]: I1127 18:08:31.554081 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vg6c9" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="registry-server" probeResult="failure" output=<
Nov 27 18:08:31 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Nov 27 18:08:31 crc kubenswrapper[4792]: >
Nov 27 18:08:38 crc kubenswrapper[4792]: I1127 18:08:38.668815 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:08:38 crc kubenswrapper[4792]: I1127 18:08:38.669277 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:08:41 crc kubenswrapper[4792]: I1127 18:08:41.547763 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vg6c9" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="registry-server" probeResult="failure" output=<
Nov 27 18:08:41 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Nov 27 18:08:41 crc kubenswrapper[4792]: >
Nov 27 18:08:50 crc kubenswrapper[4792]: I1127 18:08:50.550467 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vg6c9"
Nov 27 18:08:50 crc kubenswrapper[4792]: I1127 18:08:50.605743 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vg6c9"
Nov 27 18:08:51 crc kubenswrapper[4792]: I1127 18:08:51.363761 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vg6c9"]
Nov 27 18:08:51 crc kubenswrapper[4792]: I1127 18:08:51.858331 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vg6c9" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="registry-server" containerID="cri-o://22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3" gracePeriod=2
Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.628540 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vg6c9"
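[Note: the probe timing for redhat-operators-vg6c9 is consistent with a 1s timeout ("within 1s") and roughly 10s between attempts (failures at 18:08:31 and 18:08:41, success by 18:08:50). A dial-level sketch of the check appears earlier, after the certified-operators failure; below is the spec-level view as a Go reconstruction. This is an inference, not the actual manifest, and the real catalog pods most likely use a command-based gRPC health probe rather than the TCPSocket stand-in shown here:]

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // registryServerStartupProbe sketches a probe spec matching the observed
    // timing: 1s timeout, ~10s period, against the gRPC port 50051.
    func registryServerStartupProbe() *corev1.Probe {
    	return &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)},
    		},
    		TimeoutSeconds: 1,
    		PeriodSeconds:  10,
    	}
    }

    func main() {
    	p := registryServerStartupProbe()
    	fmt.Printf("port=%s timeout=%ds period=%ds\n", p.TCPSocket.Port.String(), p.TimeoutSeconds, p.PeriodSeconds)
    }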
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.636119 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54m7q\" (UniqueName: \"kubernetes.io/projected/16d5cbcc-a900-4c6f-8f6d-75bb92558711-kube-api-access-54m7q\") pod \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.636365 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-catalog-content\") pod \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.636492 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-utilities\") pod \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\" (UID: \"16d5cbcc-a900-4c6f-8f6d-75bb92558711\") " Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.637131 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-utilities" (OuterVolumeSpecName: "utilities") pod "16d5cbcc-a900-4c6f-8f6d-75bb92558711" (UID: "16d5cbcc-a900-4c6f-8f6d-75bb92558711"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.637915 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.643859 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d5cbcc-a900-4c6f-8f6d-75bb92558711-kube-api-access-54m7q" (OuterVolumeSpecName: "kube-api-access-54m7q") pod "16d5cbcc-a900-4c6f-8f6d-75bb92558711" (UID: "16d5cbcc-a900-4c6f-8f6d-75bb92558711"). InnerVolumeSpecName "kube-api-access-54m7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.748234 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54m7q\" (UniqueName: \"kubernetes.io/projected/16d5cbcc-a900-4c6f-8f6d-75bb92558711-kube-api-access-54m7q\") on node \"crc\" DevicePath \"\"" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.770167 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16d5cbcc-a900-4c6f-8f6d-75bb92558711" (UID: "16d5cbcc-a900-4c6f-8f6d-75bb92558711"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.850960 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16d5cbcc-a900-4c6f-8f6d-75bb92558711-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.870265 4792 generic.go:334] "Generic (PLEG): container finished" podID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerID="22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3" exitCode=0 Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.870336 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg6c9" event={"ID":"16d5cbcc-a900-4c6f-8f6d-75bb92558711","Type":"ContainerDied","Data":"22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3"} Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.870367 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg6c9" event={"ID":"16d5cbcc-a900-4c6f-8f6d-75bb92558711","Type":"ContainerDied","Data":"5c06a75648c1a564a8483105f32f03c47182f596b24c7aa083006763300afc03"} Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.870386 4792 scope.go:117] "RemoveContainer" containerID="22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.870408 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vg6c9" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.905434 4792 scope.go:117] "RemoveContainer" containerID="87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.917268 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vg6c9"] Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.929625 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vg6c9"] Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.942455 4792 scope.go:117] "RemoveContainer" containerID="fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.984564 4792 scope.go:117] "RemoveContainer" containerID="22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3" Nov 27 18:08:52 crc kubenswrapper[4792]: E1127 18:08:52.985237 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3\": container with ID starting with 22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3 not found: ID does not exist" containerID="22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3" Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.985337 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3"} err="failed to get container status \"22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3\": rpc error: code = NotFound desc = could not find container \"22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3\": container with ID starting with 22199f2f669e30c3a5ecd66bb5d0514baea4ce105d17c65ad0129b7007c7d4d3 not found: ID does not exist" Nov 27 18:08:52 crc 
Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.985417 4792 scope.go:117] "RemoveContainer" containerID="87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56"
Nov 27 18:08:52 crc kubenswrapper[4792]: E1127 18:08:52.986001 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56\": container with ID starting with 87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56 not found: ID does not exist" containerID="87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56"
Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.986062 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56"} err="failed to get container status \"87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56\": rpc error: code = NotFound desc = could not find container \"87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56\": container with ID starting with 87d85c02aa13ae8efe07710d1c4d38e4438f860b807a2b135249c03d0b62fc56 not found: ID does not exist"
Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.986094 4792 scope.go:117] "RemoveContainer" containerID="fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e"
Nov 27 18:08:52 crc kubenswrapper[4792]: E1127 18:08:52.986515 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e\": container with ID starting with fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e not found: ID does not exist" containerID="fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e"
Nov 27 18:08:52 crc kubenswrapper[4792]: I1127 18:08:52.986615 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e"} err="failed to get container status \"fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e\": rpc error: code = NotFound desc = could not find container \"fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e\": container with ID starting with fdb30a5376944e2842db86648ed69aecf5e25b14d251c223886e5a22e1d9e89e not found: ID does not exist"
Nov 27 18:08:54 crc kubenswrapper[4792]: I1127 18:08:54.703217 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" path="/var/lib/kubelet/pods/16d5cbcc-a900-4c6f-8f6d-75bb92558711/volumes"
Nov 27 18:09:08 crc kubenswrapper[4792]: I1127 18:09:08.290220 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:09:08 crc kubenswrapper[4792]: I1127 18:09:08.290796 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:09:38 crc kubenswrapper[4792]: I1127 18:09:38.289739 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:09:38 crc kubenswrapper[4792]: I1127 18:09:38.290290 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:09:38 crc kubenswrapper[4792]: I1127 18:09:38.290335 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx"
Nov 27 18:09:38 crc kubenswrapper[4792]: I1127 18:09:38.291140 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 27 18:09:38 crc kubenswrapper[4792]: I1127 18:09:38.291198 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" gracePeriod=600
Nov 27 18:09:38 crc kubenswrapper[4792]: E1127 18:09:38.414459 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:09:39 crc kubenswrapper[4792]: I1127 18:09:39.412832 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" exitCode=0
Nov 27 18:09:39 crc kubenswrapper[4792]: I1127 18:09:39.412904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af"}
Nov 27 18:09:39 crc kubenswrapper[4792]: I1127 18:09:39.413284 4792 scope.go:117] "RemoveContainer" containerID="1cc2c071d8c4b3c3b316fd6fc71964bdc58502394551a6a0851392440fb5aff4"
Nov 27 18:09:39 crc kubenswrapper[4792]: I1127 18:09:39.414219 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af"
Nov 27 18:09:39 crc kubenswrapper[4792]: E1127 18:09:39.414790 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
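[Note: "back-off 5m0s" means machine-config-daemon has crashed often enough to reach the kubelet's restart-backoff cap; the kubelet doubles the restart delay per crash, starting at 10s and capping at 5m, which matches the ~10s retry spacing seen at 18:09:51 and 18:10:02 only in that further restarts are refused until the window elapses. A small Go sketch of that doubling-with-cap arithmetic (editor's illustration, not kubelet code):]

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoff returns the wait before the next restart attempt after the
    // given number of prior restarts: 10s doubled each time, capped at 5m.
    func backoff(restarts int) time.Duration {
    	d := 10 * time.Second
    	for i := 0; i < restarts; i++ {
    		d *= 2
    		if d >= 5*time.Minute {
    			return 5 * time.Minute
    		}
    	}
    	return d
    }

    func main() {
    	for r := 0; r <= 6; r++ {
    		fmt.Printf("restart %d -> wait %s\n", r, backoff(r))
    	}
    	// 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
    }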
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:09:51 crc kubenswrapper[4792]: I1127 18:09:51.686571 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:09:51 crc kubenswrapper[4792]: E1127 18:09:51.687395 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:10:02 crc kubenswrapper[4792]: I1127 18:10:02.686594 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:10:02 crc kubenswrapper[4792]: E1127 18:10:02.687503 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.063974 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxfnw"] Nov 27 18:10:07 crc kubenswrapper[4792]: E1127 18:10:07.064902 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="registry-server" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.064921 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="registry-server" Nov 27 18:10:07 crc kubenswrapper[4792]: E1127 18:10:07.064945 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="extract-content" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.064953 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="extract-content" Nov 27 18:10:07 crc kubenswrapper[4792]: E1127 18:10:07.065009 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="extract-utilities" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.065020 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="extract-utilities" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.065289 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d5cbcc-a900-4c6f-8f6d-75bb92558711" containerName="registry-server" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.067701 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.108857 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxfnw"] Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.172314 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs64w\" (UniqueName: \"kubernetes.io/projected/8ccee955-08e3-4ff7-9a51-88addbfc0904-kube-api-access-bs64w\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.172756 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-catalog-content\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.172841 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-utilities\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.275042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs64w\" (UniqueName: \"kubernetes.io/projected/8ccee955-08e3-4ff7-9a51-88addbfc0904-kube-api-access-bs64w\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.275237 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-catalog-content\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.275266 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-utilities\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.276008 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-catalog-content\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.276044 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-utilities\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.298833 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bs64w\" (UniqueName: \"kubernetes.io/projected/8ccee955-08e3-4ff7-9a51-88addbfc0904-kube-api-access-bs64w\") pod \"redhat-marketplace-pxfnw\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.391991 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:07 crc kubenswrapper[4792]: I1127 18:10:07.939845 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxfnw"] Nov 27 18:10:08 crc kubenswrapper[4792]: I1127 18:10:08.791281 4792 generic.go:334] "Generic (PLEG): container finished" podID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerID="ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e" exitCode=0 Nov 27 18:10:08 crc kubenswrapper[4792]: I1127 18:10:08.791352 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxfnw" event={"ID":"8ccee955-08e3-4ff7-9a51-88addbfc0904","Type":"ContainerDied","Data":"ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e"} Nov 27 18:10:08 crc kubenswrapper[4792]: I1127 18:10:08.792031 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxfnw" event={"ID":"8ccee955-08e3-4ff7-9a51-88addbfc0904","Type":"ContainerStarted","Data":"655e46a94ca106e642cb40c58f95ec55f2d253f62c65f19759fc086e2d7024fa"} Nov 27 18:10:08 crc kubenswrapper[4792]: I1127 18:10:08.794815 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 18:10:10 crc kubenswrapper[4792]: I1127 18:10:10.817857 4792 generic.go:334] "Generic (PLEG): container finished" podID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerID="0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f" exitCode=0 Nov 27 18:10:10 crc kubenswrapper[4792]: I1127 18:10:10.817926 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxfnw" event={"ID":"8ccee955-08e3-4ff7-9a51-88addbfc0904","Type":"ContainerDied","Data":"0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f"} Nov 27 18:10:11 crc kubenswrapper[4792]: I1127 18:10:11.832960 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxfnw" event={"ID":"8ccee955-08e3-4ff7-9a51-88addbfc0904","Type":"ContainerStarted","Data":"832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b"} Nov 27 18:10:11 crc kubenswrapper[4792]: I1127 18:10:11.856264 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxfnw" podStartSLOduration=2.435592241 podStartE2EDuration="4.856242947s" podCreationTimestamp="2025-11-27 18:10:07 +0000 UTC" firstStartedPulling="2025-11-27 18:10:08.794537187 +0000 UTC m=+3631.137363505" lastFinishedPulling="2025-11-27 18:10:11.215187893 +0000 UTC m=+3633.558014211" observedRunningTime="2025-11-27 18:10:11.853622942 +0000 UTC m=+3634.196449260" watchObservedRunningTime="2025-11-27 18:10:11.856242947 +0000 UTC m=+3634.199069265" Nov 27 18:10:13 crc kubenswrapper[4792]: I1127 18:10:13.687140 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:10:13 crc kubenswrapper[4792]: E1127 18:10:13.687907 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:10:17 crc kubenswrapper[4792]: I1127 18:10:17.392972 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:17 crc kubenswrapper[4792]: I1127 18:10:17.393567 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:17 crc kubenswrapper[4792]: I1127 18:10:17.449733 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:17 crc kubenswrapper[4792]: I1127 18:10:17.963067 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:18 crc kubenswrapper[4792]: I1127 18:10:18.016753 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxfnw"] Nov 27 18:10:19 crc kubenswrapper[4792]: I1127 18:10:19.918945 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxfnw" podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerName="registry-server" containerID="cri-o://832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b" gracePeriod=2 Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.533017 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.731612 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs64w\" (UniqueName: \"kubernetes.io/projected/8ccee955-08e3-4ff7-9a51-88addbfc0904-kube-api-access-bs64w\") pod \"8ccee955-08e3-4ff7-9a51-88addbfc0904\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.732271 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-catalog-content\") pod \"8ccee955-08e3-4ff7-9a51-88addbfc0904\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.732314 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-utilities\") pod \"8ccee955-08e3-4ff7-9a51-88addbfc0904\" (UID: \"8ccee955-08e3-4ff7-9a51-88addbfc0904\") " Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.733100 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-utilities" (OuterVolumeSpecName: "utilities") pod "8ccee955-08e3-4ff7-9a51-88addbfc0904" (UID: "8ccee955-08e3-4ff7-9a51-88addbfc0904"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.738251 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccee955-08e3-4ff7-9a51-88addbfc0904-kube-api-access-bs64w" (OuterVolumeSpecName: "kube-api-access-bs64w") pod "8ccee955-08e3-4ff7-9a51-88addbfc0904" (UID: "8ccee955-08e3-4ff7-9a51-88addbfc0904"). InnerVolumeSpecName "kube-api-access-bs64w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.754794 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ccee955-08e3-4ff7-9a51-88addbfc0904" (UID: "8ccee955-08e3-4ff7-9a51-88addbfc0904"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.835585 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs64w\" (UniqueName: \"kubernetes.io/projected/8ccee955-08e3-4ff7-9a51-88addbfc0904-kube-api-access-bs64w\") on node \"crc\" DevicePath \"\"" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.835684 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.835704 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ccee955-08e3-4ff7-9a51-88addbfc0904-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.941868 4792 generic.go:334] "Generic (PLEG): container finished" podID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerID="832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b" exitCode=0 Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.941906 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxfnw" event={"ID":"8ccee955-08e3-4ff7-9a51-88addbfc0904","Type":"ContainerDied","Data":"832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b"} Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.941990 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxfnw" event={"ID":"8ccee955-08e3-4ff7-9a51-88addbfc0904","Type":"ContainerDied","Data":"655e46a94ca106e642cb40c58f95ec55f2d253f62c65f19759fc086e2d7024fa"} Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.942012 4792 scope.go:117] "RemoveContainer" containerID="832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.941921 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxfnw" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.969772 4792 scope.go:117] "RemoveContainer" containerID="0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f" Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.978814 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxfnw"] Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.990934 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxfnw"] Nov 27 18:10:20 crc kubenswrapper[4792]: I1127 18:10:20.999010 4792 scope.go:117] "RemoveContainer" containerID="ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e" Nov 27 18:10:21 crc kubenswrapper[4792]: I1127 18:10:21.073476 4792 scope.go:117] "RemoveContainer" containerID="832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b" Nov 27 18:10:21 crc kubenswrapper[4792]: E1127 18:10:21.074556 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b\": container with ID starting with 832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b not found: ID does not exist" containerID="832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b" Nov 27 18:10:21 crc kubenswrapper[4792]: I1127 18:10:21.074591 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b"} err="failed to get container status \"832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b\": rpc error: code = NotFound desc = could not find container \"832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b\": container with ID starting with 832041bd04b28519c6280b410226e5df62fe7b61cf39d8ffd53acc31a64b5c7b not found: ID does not exist" Nov 27 18:10:21 crc kubenswrapper[4792]: I1127 18:10:21.074630 4792 scope.go:117] "RemoveContainer" containerID="0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f" Nov 27 18:10:21 crc kubenswrapper[4792]: E1127 18:10:21.076201 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f\": container with ID starting with 0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f not found: ID does not exist" containerID="0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f" Nov 27 18:10:21 crc kubenswrapper[4792]: I1127 18:10:21.076256 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f"} err="failed to get container status \"0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f\": rpc error: code = NotFound desc = could not find container \"0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f\": container with ID starting with 0e5e252193ae5ee4fefd8c3d45a6d337899ab7c7945b82cad3d56af294eb735f not found: ID does not exist" Nov 27 18:10:21 crc kubenswrapper[4792]: I1127 18:10:21.076292 4792 scope.go:117] "RemoveContainer" containerID="ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e" Nov 27 18:10:21 crc kubenswrapper[4792]: E1127 18:10:21.077727 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e\": container with ID starting with ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e not found: ID does not exist" containerID="ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e" Nov 27 18:10:21 crc kubenswrapper[4792]: I1127 18:10:21.077797 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e"} err="failed to get container status \"ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e\": rpc error: code = NotFound desc = could not find container \"ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e\": container with ID starting with ea59769e85362d33c74ef4bae82dbea67e0a75090af4717540eaf791df47261e not found: ID does not exist" Nov 27 18:10:22 crc kubenswrapper[4792]: I1127 18:10:22.699175 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" path="/var/lib/kubelet/pods/8ccee955-08e3-4ff7-9a51-88addbfc0904/volumes" Nov 27 18:10:26 crc kubenswrapper[4792]: I1127 18:10:26.687792 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:10:26 crc kubenswrapper[4792]: E1127 18:10:26.688567 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:10:38 crc kubenswrapper[4792]: I1127 18:10:38.699175 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:10:38 crc kubenswrapper[4792]: E1127 18:10:38.700243 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:10:49 crc kubenswrapper[4792]: I1127 18:10:49.687822 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:10:49 crc kubenswrapper[4792]: E1127 18:10:49.688939 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:11:03 crc kubenswrapper[4792]: I1127 18:11:03.688728 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:11:03 crc kubenswrapper[4792]: E1127 18:11:03.689571 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:11:16 crc kubenswrapper[4792]: I1127 18:11:16.687473 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:11:16 crc kubenswrapper[4792]: E1127 18:11:16.688427 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:11:29 crc kubenswrapper[4792]: I1127 18:11:29.687210 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:11:29 crc kubenswrapper[4792]: E1127 18:11:29.688176 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:11:44 crc kubenswrapper[4792]: I1127 18:11:44.687358 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:11:44 crc kubenswrapper[4792]: E1127 18:11:44.688360 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:11:55 crc kubenswrapper[4792]: I1127 18:11:55.688773 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:11:55 crc kubenswrapper[4792]: E1127 18:11:55.689588 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:12:10 crc kubenswrapper[4792]: I1127 18:12:10.687254 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:12:10 crc kubenswrapper[4792]: E1127 18:12:10.688087 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:12:22 crc kubenswrapper[4792]: I1127 18:12:22.687438 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:12:22 crc kubenswrapper[4792]: E1127 18:12:22.688450 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:12:37 crc kubenswrapper[4792]: I1127 18:12:37.687284 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:12:37 crc kubenswrapper[4792]: E1127 18:12:37.688091 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:12:49 crc kubenswrapper[4792]: I1127 18:12:49.688583 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:12:49 crc kubenswrapper[4792]: E1127 18:12:49.689631 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:13:04 crc kubenswrapper[4792]: I1127 18:13:04.686548 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:13:04 crc kubenswrapper[4792]: E1127 18:13:04.687359 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:13:18 crc kubenswrapper[4792]: I1127 18:13:18.695812 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:13:18 crc kubenswrapper[4792]: E1127 18:13:18.696729 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" 
podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:13:30 crc kubenswrapper[4792]: I1127 18:13:30.686901 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:13:30 crc kubenswrapper[4792]: E1127 18:13:30.687788 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:13:43 crc kubenswrapper[4792]: I1127 18:13:43.687425 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:13:43 crc kubenswrapper[4792]: E1127 18:13:43.689263 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:13:56 crc kubenswrapper[4792]: I1127 18:13:56.882022 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-55455cb8cf-gtjxc" podUID="9ace987a-3f62-48ce-8c4b-b9c50cd2a29e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 27 18:13:57 crc kubenswrapper[4792]: I1127 18:13:57.687545 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:13:57 crc kubenswrapper[4792]: E1127 18:13:57.688152 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:14:10 crc kubenswrapper[4792]: I1127 18:14:10.687180 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:14:10 crc kubenswrapper[4792]: E1127 18:14:10.688204 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:14:11 crc kubenswrapper[4792]: I1127 18:14:11.981600 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tp9gt"] Nov 27 18:14:11 crc kubenswrapper[4792]: E1127 18:14:11.982322 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerName="registry-server" Nov 27 18:14:11 crc kubenswrapper[4792]: I1127 18:14:11.982343 4792 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerName="registry-server" Nov 27 18:14:11 crc kubenswrapper[4792]: E1127 18:14:11.982401 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerName="extract-utilities" Nov 27 18:14:11 crc kubenswrapper[4792]: I1127 18:14:11.982409 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerName="extract-utilities" Nov 27 18:14:11 crc kubenswrapper[4792]: E1127 18:14:11.982438 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerName="extract-content" Nov 27 18:14:11 crc kubenswrapper[4792]: I1127 18:14:11.982445 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerName="extract-content" Nov 27 18:14:11 crc kubenswrapper[4792]: I1127 18:14:11.982768 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccee955-08e3-4ff7-9a51-88addbfc0904" containerName="registry-server" Nov 27 18:14:11 crc kubenswrapper[4792]: I1127 18:14:11.984961 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:11 crc kubenswrapper[4792]: I1127 18:14:11.996095 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tp9gt"] Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.067085 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-utilities\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.067167 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88j28\" (UniqueName: \"kubernetes.io/projected/3c6107be-7e93-43ae-8603-d595705f45c4-kube-api-access-88j28\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.067286 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-catalog-content\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.169339 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-catalog-content\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.169463 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-utilities\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.169500 4792 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88j28\" (UniqueName: \"kubernetes.io/projected/3c6107be-7e93-43ae-8603-d595705f45c4-kube-api-access-88j28\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.170186 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-catalog-content\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.170393 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-utilities\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.191318 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88j28\" (UniqueName: \"kubernetes.io/projected/3c6107be-7e93-43ae-8603-d595705f45c4-kube-api-access-88j28\") pod \"community-operators-tp9gt\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.305574 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:12 crc kubenswrapper[4792]: I1127 18:14:12.958897 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tp9gt"] Nov 27 18:14:13 crc kubenswrapper[4792]: I1127 18:14:13.529091 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c6107be-7e93-43ae-8603-d595705f45c4" containerID="32841fd5749d99146601f15507fbf2c4192b6acecbfba21a70ac6e4fbd700b25" exitCode=0 Nov 27 18:14:13 crc kubenswrapper[4792]: I1127 18:14:13.529196 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp9gt" event={"ID":"3c6107be-7e93-43ae-8603-d595705f45c4","Type":"ContainerDied","Data":"32841fd5749d99146601f15507fbf2c4192b6acecbfba21a70ac6e4fbd700b25"} Nov 27 18:14:13 crc kubenswrapper[4792]: I1127 18:14:13.529402 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp9gt" event={"ID":"3c6107be-7e93-43ae-8603-d595705f45c4","Type":"ContainerStarted","Data":"65b8ebaf39010da0cebae272ce80b12ffd37c6beeccc9c64604d9bb509b4826f"} Nov 27 18:14:15 crc kubenswrapper[4792]: I1127 18:14:15.551152 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp9gt" event={"ID":"3c6107be-7e93-43ae-8603-d595705f45c4","Type":"ContainerStarted","Data":"41e756fb4bd1bd99a5ec9b0d9b7fcdfe1f2932c0a7a9799ff1532f0b1270a4ab"} Nov 27 18:14:18 crc kubenswrapper[4792]: I1127 18:14:18.584447 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c6107be-7e93-43ae-8603-d595705f45c4" containerID="41e756fb4bd1bd99a5ec9b0d9b7fcdfe1f2932c0a7a9799ff1532f0b1270a4ab" exitCode=0 Nov 27 18:14:18 crc kubenswrapper[4792]: I1127 18:14:18.584532 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp9gt" 
event={"ID":"3c6107be-7e93-43ae-8603-d595705f45c4","Type":"ContainerDied","Data":"41e756fb4bd1bd99a5ec9b0d9b7fcdfe1f2932c0a7a9799ff1532f0b1270a4ab"} Nov 27 18:14:19 crc kubenswrapper[4792]: I1127 18:14:19.598461 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp9gt" event={"ID":"3c6107be-7e93-43ae-8603-d595705f45c4","Type":"ContainerStarted","Data":"c7e7a9d3bdce4d54a4792ca0edb7e4340520247c5538865ef42ecc386544d3b5"} Nov 27 18:14:19 crc kubenswrapper[4792]: I1127 18:14:19.618796 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tp9gt" podStartSLOduration=2.8921836450000002 podStartE2EDuration="8.618776983s" podCreationTimestamp="2025-11-27 18:14:11 +0000 UTC" firstStartedPulling="2025-11-27 18:14:13.531420354 +0000 UTC m=+3875.874246672" lastFinishedPulling="2025-11-27 18:14:19.258013692 +0000 UTC m=+3881.600840010" observedRunningTime="2025-11-27 18:14:19.611939423 +0000 UTC m=+3881.954765761" watchObservedRunningTime="2025-11-27 18:14:19.618776983 +0000 UTC m=+3881.961603301" Nov 27 18:14:22 crc kubenswrapper[4792]: I1127 18:14:22.306449 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:22 crc kubenswrapper[4792]: I1127 18:14:22.307082 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:23 crc kubenswrapper[4792]: I1127 18:14:23.356307 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tp9gt" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="registry-server" probeResult="failure" output=< Nov 27 18:14:23 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:14:23 crc kubenswrapper[4792]: > Nov 27 18:14:25 crc kubenswrapper[4792]: I1127 18:14:25.688829 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:14:25 crc kubenswrapper[4792]: E1127 18:14:25.689818 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:14:33 crc kubenswrapper[4792]: I1127 18:14:33.362830 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tp9gt" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="registry-server" probeResult="failure" output=< Nov 27 18:14:33 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:14:33 crc kubenswrapper[4792]: > Nov 27 18:14:37 crc kubenswrapper[4792]: I1127 18:14:37.686393 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:14:37 crc kubenswrapper[4792]: E1127 18:14:37.687183 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:14:42 crc kubenswrapper[4792]: I1127 18:14:42.408898 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:42 crc kubenswrapper[4792]: I1127 18:14:42.481127 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:45 crc kubenswrapper[4792]: I1127 18:14:45.386554 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tp9gt"] Nov 27 18:14:45 crc kubenswrapper[4792]: I1127 18:14:45.387269 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tp9gt" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="registry-server" containerID="cri-o://c7e7a9d3bdce4d54a4792ca0edb7e4340520247c5538865ef42ecc386544d3b5" gracePeriod=2 Nov 27 18:14:45 crc kubenswrapper[4792]: I1127 18:14:45.875549 4792 generic.go:334] "Generic (PLEG): container finished" podID="3c6107be-7e93-43ae-8603-d595705f45c4" containerID="c7e7a9d3bdce4d54a4792ca0edb7e4340520247c5538865ef42ecc386544d3b5" exitCode=0 Nov 27 18:14:45 crc kubenswrapper[4792]: I1127 18:14:45.875653 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp9gt" event={"ID":"3c6107be-7e93-43ae-8603-d595705f45c4","Type":"ContainerDied","Data":"c7e7a9d3bdce4d54a4792ca0edb7e4340520247c5538865ef42ecc386544d3b5"} Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.020694 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.116021 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-utilities\") pod \"3c6107be-7e93-43ae-8603-d595705f45c4\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.116894 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88j28\" (UniqueName: \"kubernetes.io/projected/3c6107be-7e93-43ae-8603-d595705f45c4-kube-api-access-88j28\") pod \"3c6107be-7e93-43ae-8603-d595705f45c4\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.116976 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-catalog-content\") pod \"3c6107be-7e93-43ae-8603-d595705f45c4\" (UID: \"3c6107be-7e93-43ae-8603-d595705f45c4\") " Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.119195 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-utilities" (OuterVolumeSpecName: "utilities") pod "3c6107be-7e93-43ae-8603-d595705f45c4" (UID: "3c6107be-7e93-43ae-8603-d595705f45c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.128590 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6107be-7e93-43ae-8603-d595705f45c4-kube-api-access-88j28" (OuterVolumeSpecName: "kube-api-access-88j28") pod "3c6107be-7e93-43ae-8603-d595705f45c4" (UID: "3c6107be-7e93-43ae-8603-d595705f45c4"). InnerVolumeSpecName "kube-api-access-88j28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.181542 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c6107be-7e93-43ae-8603-d595705f45c4" (UID: "3c6107be-7e93-43ae-8603-d595705f45c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.221687 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.221747 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88j28\" (UniqueName: \"kubernetes.io/projected/3c6107be-7e93-43ae-8603-d595705f45c4-kube-api-access-88j28\") on node \"crc\" DevicePath \"\"" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.221765 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6107be-7e93-43ae-8603-d595705f45c4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.888171 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tp9gt" event={"ID":"3c6107be-7e93-43ae-8603-d595705f45c4","Type":"ContainerDied","Data":"65b8ebaf39010da0cebae272ce80b12ffd37c6beeccc9c64604d9bb509b4826f"} Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.888244 4792 scope.go:117] "RemoveContainer" containerID="c7e7a9d3bdce4d54a4792ca0edb7e4340520247c5538865ef42ecc386544d3b5" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.888287 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tp9gt" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.916440 4792 scope.go:117] "RemoveContainer" containerID="41e756fb4bd1bd99a5ec9b0d9b7fcdfe1f2932c0a7a9799ff1532f0b1270a4ab" Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.920319 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tp9gt"] Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.935418 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tp9gt"] Nov 27 18:14:46 crc kubenswrapper[4792]: I1127 18:14:46.937690 4792 scope.go:117] "RemoveContainer" containerID="32841fd5749d99146601f15507fbf2c4192b6acecbfba21a70ac6e4fbd700b25" Nov 27 18:14:48 crc kubenswrapper[4792]: I1127 18:14:48.699331 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" path="/var/lib/kubelet/pods/3c6107be-7e93-43ae-8603-d595705f45c4/volumes" Nov 27 18:14:50 crc kubenswrapper[4792]: I1127 18:14:50.686726 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:14:51 crc kubenswrapper[4792]: I1127 18:14:51.946605 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"0374505357be39e7aa0b55631a44de90e826c04f3723252e953501ac8661747f"} Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.188357 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd"] Nov 27 18:15:00 crc kubenswrapper[4792]: E1127 18:15:00.189494 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="extract-content" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.189514 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="extract-content" Nov 27 18:15:00 crc kubenswrapper[4792]: E1127 18:15:00.189553 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="extract-utilities" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.189560 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="extract-utilities" Nov 27 18:15:00 crc kubenswrapper[4792]: E1127 18:15:00.189573 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="registry-server" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.189579 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="registry-server" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.189840 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6107be-7e93-43ae-8603-d595705f45c4" containerName="registry-server" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.190690 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.192761 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.193679 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.214777 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd"] Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.258879 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2qqd\" (UniqueName: \"kubernetes.io/projected/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-kube-api-access-k2qqd\") pod \"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.258984 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-config-volume\") pod \"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.259060 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-secret-volume\") pod \"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.361418 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2qqd\" (UniqueName: \"kubernetes.io/projected/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-kube-api-access-k2qqd\") pod \"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.361519 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-config-volume\") pod \"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.361596 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-secret-volume\") pod \"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.362619 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-config-volume\") pod 
\"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.371887 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-secret-volume\") pod \"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.383444 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2qqd\" (UniqueName: \"kubernetes.io/projected/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-kube-api-access-k2qqd\") pod \"collect-profiles-29404455-rgbwd\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.512358 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:00 crc kubenswrapper[4792]: I1127 18:15:00.996586 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd"] Nov 27 18:15:01 crc kubenswrapper[4792]: I1127 18:15:01.060624 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" event={"ID":"0c6e5c9f-cb09-413c-b804-a37b4fa3df59","Type":"ContainerStarted","Data":"d995763f1af609af02675ca2c989ffc29678942b2d59ca004e3c59e6bc59c9bc"} Nov 27 18:15:02 crc kubenswrapper[4792]: I1127 18:15:02.075876 4792 generic.go:334] "Generic (PLEG): container finished" podID="0c6e5c9f-cb09-413c-b804-a37b4fa3df59" containerID="586dd0be80654f40da1f21276eef65e0c5c2ca707c276738252e2fb278ecf1bb" exitCode=0 Nov 27 18:15:02 crc kubenswrapper[4792]: I1127 18:15:02.076081 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" event={"ID":"0c6e5c9f-cb09-413c-b804-a37b4fa3df59","Type":"ContainerDied","Data":"586dd0be80654f40da1f21276eef65e0c5c2ca707c276738252e2fb278ecf1bb"} Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.520257 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.640615 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-config-volume\") pod \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.640909 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-secret-volume\") pod \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.641060 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2qqd\" (UniqueName: \"kubernetes.io/projected/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-kube-api-access-k2qqd\") pod \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\" (UID: \"0c6e5c9f-cb09-413c-b804-a37b4fa3df59\") " Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.641637 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c6e5c9f-cb09-413c-b804-a37b4fa3df59" (UID: "0c6e5c9f-cb09-413c-b804-a37b4fa3df59"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.642416 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-config-volume\") on node \"crc\" DevicePath \"\"" Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.646713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-kube-api-access-k2qqd" (OuterVolumeSpecName: "kube-api-access-k2qqd") pod "0c6e5c9f-cb09-413c-b804-a37b4fa3df59" (UID: "0c6e5c9f-cb09-413c-b804-a37b4fa3df59"). InnerVolumeSpecName "kube-api-access-k2qqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.647007 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c6e5c9f-cb09-413c-b804-a37b4fa3df59" (UID: "0c6e5c9f-cb09-413c-b804-a37b4fa3df59"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.745436 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2qqd\" (UniqueName: \"kubernetes.io/projected/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-kube-api-access-k2qqd\") on node \"crc\" DevicePath \"\"" Nov 27 18:15:03 crc kubenswrapper[4792]: I1127 18:15:03.745472 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c6e5c9f-cb09-413c-b804-a37b4fa3df59-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 27 18:15:04 crc kubenswrapper[4792]: I1127 18:15:04.098897 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" event={"ID":"0c6e5c9f-cb09-413c-b804-a37b4fa3df59","Type":"ContainerDied","Data":"d995763f1af609af02675ca2c989ffc29678942b2d59ca004e3c59e6bc59c9bc"} Nov 27 18:15:04 crc kubenswrapper[4792]: I1127 18:15:04.099161 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d995763f1af609af02675ca2c989ffc29678942b2d59ca004e3c59e6bc59c9bc" Nov 27 18:15:04 crc kubenswrapper[4792]: I1127 18:15:04.099044 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd" Nov 27 18:15:04 crc kubenswrapper[4792]: I1127 18:15:04.612606 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"] Nov 27 18:15:04 crc kubenswrapper[4792]: I1127 18:15:04.630233 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404410-dxc56"] Nov 27 18:15:04 crc kubenswrapper[4792]: I1127 18:15:04.706714 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f657f1d-3b19-4447-8d11-3525019b515b" path="/var/lib/kubelet/pods/3f657f1d-3b19-4447-8d11-3525019b515b/volumes" Nov 27 18:15:05 crc kubenswrapper[4792]: I1127 18:15:05.857827 4792 scope.go:117] "RemoveContainer" containerID="5420a4342779ba081e9b32a665893fd8815c358969d6a6e06c6595bd04bc1362" Nov 27 18:15:08 crc kubenswrapper[4792]: E1127 18:15:08.243886 4792 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.214:55786->38.102.83.214:33271: read tcp 38.102.83.214:55786->38.102.83.214:33271: read: connection reset by peer Nov 27 18:15:08 crc kubenswrapper[4792]: E1127 18:15:08.245768 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:55786->38.102.83.214:33271: write tcp 38.102.83.214:55786->38.102.83.214:33271: write: broken pipe Nov 27 18:15:31 crc kubenswrapper[4792]: I1127 18:15:31.918611 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-khgp2"] Nov 27 18:15:31 crc kubenswrapper[4792]: E1127 18:15:31.922230 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6e5c9f-cb09-413c-b804-a37b4fa3df59" containerName="collect-profiles" Nov 27 18:15:31 crc kubenswrapper[4792]: I1127 18:15:31.922251 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6e5c9f-cb09-413c-b804-a37b4fa3df59" containerName="collect-profiles" Nov 27 18:15:31 crc kubenswrapper[4792]: I1127 18:15:31.922512 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6e5c9f-cb09-413c-b804-a37b4fa3df59" containerName="collect-profiles" Nov 27 18:15:31 crc kubenswrapper[4792]: I1127 
Nov 27 18:15:31 crc kubenswrapper[4792]: I1127 18:15:31.924604 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:31 crc kubenswrapper[4792]: I1127 18:15:31.937345 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khgp2"]
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.110305 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srjlk\" (UniqueName: \"kubernetes.io/projected/b68d1963-13cb-479e-8946-2a1ffe534c42-kube-api-access-srjlk\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.110778 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-catalog-content\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.110867 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-utilities\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.212811 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-catalog-content\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.212896 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-utilities\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.213020 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srjlk\" (UniqueName: \"kubernetes.io/projected/b68d1963-13cb-479e-8946-2a1ffe534c42-kube-api-access-srjlk\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.213396 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-catalog-content\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.213439 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-utilities\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.240498 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srjlk\" (UniqueName: \"kubernetes.io/projected/b68d1963-13cb-479e-8946-2a1ffe534c42-kube-api-access-srjlk\") pod \"certified-operators-khgp2\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") " pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.250478 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:32 crc kubenswrapper[4792]: I1127 18:15:32.894841 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khgp2"]
Nov 27 18:15:33 crc kubenswrapper[4792]: I1127 18:15:33.442208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khgp2" event={"ID":"b68d1963-13cb-479e-8946-2a1ffe534c42","Type":"ContainerStarted","Data":"b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a"}
Nov 27 18:15:33 crc kubenswrapper[4792]: I1127 18:15:33.443173 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khgp2" event={"ID":"b68d1963-13cb-479e-8946-2a1ffe534c42","Type":"ContainerStarted","Data":"e9bce641fe4bc8cd5d1e6defb2a0d06238fcd93eb9f6152824dd6239bbbdc11f"}
Nov 27 18:15:34 crc kubenswrapper[4792]: I1127 18:15:34.455099 4792 generic.go:334] "Generic (PLEG): container finished" podID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerID="b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a" exitCode=0
Nov 27 18:15:34 crc kubenswrapper[4792]: I1127 18:15:34.455205 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khgp2" event={"ID":"b68d1963-13cb-479e-8946-2a1ffe534c42","Type":"ContainerDied","Data":"b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a"}
Nov 27 18:15:34 crc kubenswrapper[4792]: I1127 18:15:34.457713 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 27 18:15:37 crc kubenswrapper[4792]: I1127 18:15:37.494398 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khgp2" event={"ID":"b68d1963-13cb-479e-8946-2a1ffe534c42","Type":"ContainerStarted","Data":"d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719"}
Nov 27 18:15:41 crc kubenswrapper[4792]: I1127 18:15:41.543225 4792 generic.go:334] "Generic (PLEG): container finished" podID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerID="d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719" exitCode=0
Nov 27 18:15:41 crc kubenswrapper[4792]: I1127 18:15:41.543555 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khgp2" event={"ID":"b68d1963-13cb-479e-8946-2a1ffe534c42","Type":"ContainerDied","Data":"d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719"}
Nov 27 18:15:42 crc kubenswrapper[4792]: I1127 18:15:42.555942 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khgp2" event={"ID":"b68d1963-13cb-479e-8946-2a1ffe534c42","Type":"ContainerStarted","Data":"dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023"}
Nov 27 18:15:42 crc kubenswrapper[4792]: I1127 18:15:42.583222 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-khgp2" podStartSLOduration=3.7560888820000002 podStartE2EDuration="11.583195982s" podCreationTimestamp="2025-11-27 18:15:31 +0000 UTC" firstStartedPulling="2025-11-27 18:15:34.457450566 +0000 UTC m=+3956.800276884" lastFinishedPulling="2025-11-27 18:15:42.284557666 +0000 UTC m=+3964.627383984" observedRunningTime="2025-11-27 18:15:42.570053605 +0000 UTC m=+3964.912879923" watchObservedRunningTime="2025-11-27 18:15:42.583195982 +0000 UTC m=+3964.926022320"
Nov 27 18:15:52 crc kubenswrapper[4792]: I1127 18:15:52.250781 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:52 crc kubenswrapper[4792]: I1127 18:15:52.252416 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:52 crc kubenswrapper[4792]: I1127 18:15:52.311909 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:52 crc kubenswrapper[4792]: I1127 18:15:52.735664 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:52 crc kubenswrapper[4792]: I1127 18:15:52.785869 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-khgp2"]
Nov 27 18:15:54 crc kubenswrapper[4792]: I1127 18:15:54.690932 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-khgp2" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerName="registry-server" containerID="cri-o://dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023" gracePeriod=2
Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.428057 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khgp2"
Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.586233 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-utilities\") pod \"b68d1963-13cb-479e-8946-2a1ffe534c42\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") "
Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.586589 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srjlk\" (UniqueName: \"kubernetes.io/projected/b68d1963-13cb-479e-8946-2a1ffe534c42-kube-api-access-srjlk\") pod \"b68d1963-13cb-479e-8946-2a1ffe534c42\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") "
Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.586741 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-catalog-content\") pod \"b68d1963-13cb-479e-8946-2a1ffe534c42\" (UID: \"b68d1963-13cb-479e-8946-2a1ffe534c42\") "
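
The "Observed pod startup duration" record above is internally consistent and worth decoding once: the image-pull window is lastFinishedPulling minus firstStartedPulling, and the monotonic m=+... offsets make that subtraction immune to wall-clock jumps. Here that gives 3964.627383984 - 3956.800276884 ≈ 7.827 s of pulling, and podStartE2EDuration (11.583195982 s) minus the pull window reproduces podStartSLOduration (3.756088882 s), i.e. the SLO figure excludes image-pull time. A sketch that recomputes this from the record text; the regex is the sketch's own assumption, not a kubelet-defined format:

    import re

    REC = re.compile(
        r'podStartSLOduration=(?P<slo>[\d.]+).*?'
        r'podStartE2EDuration="(?P<e2e>[\d.]+)s".*?'
        r'firstStartedPulling="[^"]*m=\+(?P<p0>[\d.]+)".*?'
        r'lastFinishedPulling="[^"]*m=\+(?P<p1>[\d.]+)"'
    )

    # Values copied from the record above.
    record = (
        'podStartSLOduration=3.7560888820000002 podStartE2EDuration="11.583195982s" '
        'podCreationTimestamp="2025-11-27 18:15:31 +0000 UTC" '
        'firstStartedPulling="2025-11-27 18:15:34.457450566 +0000 UTC m=+3956.800276884" '
        'lastFinishedPulling="2025-11-27 18:15:42.284557666 +0000 UTC m=+3964.627383984"'
    )

    m = REC.search(record)
    pull = float(m["p1"]) - float(m["p0"])             # image-pull window, ~7.827 s
    print(f"pull window  : {pull:.6f}s")
    print(f"e2e - pull   : {float(m['e2e']) - pull:.6f}s")  # matches slo, up to float rounding
    print(f"reported SLO : {float(m['slo']):.6f}s")
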
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.587571 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.597289 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b68d1963-13cb-479e-8946-2a1ffe534c42-kube-api-access-srjlk" (OuterVolumeSpecName: "kube-api-access-srjlk") pod "b68d1963-13cb-479e-8946-2a1ffe534c42" (UID: "b68d1963-13cb-479e-8946-2a1ffe534c42"). InnerVolumeSpecName "kube-api-access-srjlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.643592 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b68d1963-13cb-479e-8946-2a1ffe534c42" (UID: "b68d1963-13cb-479e-8946-2a1ffe534c42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.690555 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b68d1963-13cb-479e-8946-2a1ffe534c42-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.690598 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srjlk\" (UniqueName: \"kubernetes.io/projected/b68d1963-13cb-479e-8946-2a1ffe534c42-kube-api-access-srjlk\") on node \"crc\" DevicePath \"\"" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.705576 4792 generic.go:334] "Generic (PLEG): container finished" podID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerID="dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023" exitCode=0 Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.705633 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khgp2" event={"ID":"b68d1963-13cb-479e-8946-2a1ffe534c42","Type":"ContainerDied","Data":"dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023"} Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.705688 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khgp2" event={"ID":"b68d1963-13cb-479e-8946-2a1ffe534c42","Type":"ContainerDied","Data":"e9bce641fe4bc8cd5d1e6defb2a0d06238fcd93eb9f6152824dd6239bbbdc11f"} Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.705711 4792 scope.go:117] "RemoveContainer" containerID="dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.705886 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-khgp2" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.755974 4792 scope.go:117] "RemoveContainer" containerID="d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.786149 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-khgp2"] Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.806012 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-khgp2"] Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.807364 4792 scope.go:117] "RemoveContainer" containerID="b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.861682 4792 scope.go:117] "RemoveContainer" containerID="dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023" Nov 27 18:15:55 crc kubenswrapper[4792]: E1127 18:15:55.862296 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023\": container with ID starting with dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023 not found: ID does not exist" containerID="dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.862338 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023"} err="failed to get container status \"dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023\": rpc error: code = NotFound desc = could not find container \"dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023\": container with ID starting with dfeaaa0fbdd5979220d967dd9bc50d4cf3a80cc56cab35e716ba81a0a3a1a023 not found: ID does not exist" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.862364 4792 scope.go:117] "RemoveContainer" containerID="d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719" Nov 27 18:15:55 crc kubenswrapper[4792]: E1127 18:15:55.862801 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719\": container with ID starting with d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719 not found: ID does not exist" containerID="d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.862856 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719"} err="failed to get container status \"d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719\": rpc error: code = NotFound desc = could not find container \"d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719\": container with ID starting with d48790bb54b26726b8f41f5ff289e20803345bf07c36430cac3ab3db923a3719 not found: ID does not exist" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.862894 4792 scope.go:117] "RemoveContainer" containerID="b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a" Nov 27 18:15:55 crc kubenswrapper[4792]: E1127 18:15:55.863320 4792 log.go:32] "ContainerStatus from runtime service 
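
The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above are a benign race, not a fault: by the time the kubelet retried the delete, CRI-O had already removed the container, so the runtime answers NotFound and the delete is effectively an idempotent no-op. When scanning a journal like this one, it helps to separate those from genuine delete failures; treating code = NotFound as the only benign case is an assumption of this sketch:

    def classify_delete_errors(path="kubelet.log"):  # assumed dump of this journal
        benign, real = [], []
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                if "DeleteContainer returned error" not in line:
                    continue
                # NotFound from the runtime means the container is already gone;
                # anything else is a genuine failure worth alerting on.
                (benign if "code = NotFound" in line else real).append(line.strip())
        return benign, real

    benign, real = classify_delete_errors()
    print(f"{len(benign)} benign already-removed deletes, {len(real)} real failures")
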
failed" err="rpc error: code = NotFound desc = could not find container \"b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a\": container with ID starting with b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a not found: ID does not exist" containerID="b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a" Nov 27 18:15:55 crc kubenswrapper[4792]: I1127 18:15:55.863377 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a"} err="failed to get container status \"b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a\": rpc error: code = NotFound desc = could not find container \"b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a\": container with ID starting with b4f05c50e8970f39f4fb6dfabfe46173f75debc4c502c66643c467067e58172a not found: ID does not exist" Nov 27 18:15:56 crc kubenswrapper[4792]: I1127 18:15:56.701282 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" path="/var/lib/kubelet/pods/b68d1963-13cb-479e-8946-2a1ffe534c42/volumes" Nov 27 18:17:08 crc kubenswrapper[4792]: I1127 18:17:08.291619 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:17:08 crc kubenswrapper[4792]: I1127 18:17:08.292298 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:17:38 crc kubenswrapper[4792]: I1127 18:17:38.340540 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:17:38 crc kubenswrapper[4792]: I1127 18:17:38.341462 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:17:44 crc kubenswrapper[4792]: E1127 18:17:44.317224 4792 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.214:58418->38.102.83.214:33271: read tcp 38.102.83.214:58418->38.102.83.214:33271: read: connection reset by peer Nov 27 18:18:08 crc kubenswrapper[4792]: I1127 18:18:08.290831 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:18:08 crc kubenswrapper[4792]: I1127 18:18:08.291884 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:18:08 crc kubenswrapper[4792]: I1127 18:18:08.291953 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 18:18:08 crc kubenswrapper[4792]: I1127 18:18:08.293499 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0374505357be39e7aa0b55631a44de90e826c04f3723252e953501ac8661747f"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 18:18:08 crc kubenswrapper[4792]: I1127 18:18:08.293559 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://0374505357be39e7aa0b55631a44de90e826c04f3723252e953501ac8661747f" gracePeriod=600 Nov 27 18:18:09 crc kubenswrapper[4792]: I1127 18:18:09.216084 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="0374505357be39e7aa0b55631a44de90e826c04f3723252e953501ac8661747f" exitCode=0 Nov 27 18:18:09 crc kubenswrapper[4792]: I1127 18:18:09.216147 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"0374505357be39e7aa0b55631a44de90e826c04f3723252e953501ac8661747f"} Nov 27 18:18:09 crc kubenswrapper[4792]: I1127 18:18:09.216491 4792 scope.go:117] "RemoveContainer" containerID="36980b546cb1b508da30ac1709b490a1e2e8b5435f17274e2d33d96b5e6af5af" Nov 27 18:18:10 crc kubenswrapper[4792]: I1127 18:18:10.229538 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807"} Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.766666 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hbrpj"] Nov 27 18:19:46 crc kubenswrapper[4792]: E1127 18:19:46.767741 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerName="extract-utilities" Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.767763 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerName="extract-utilities" Nov 27 18:19:46 crc kubenswrapper[4792]: E1127 18:19:46.767787 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerName="registry-server" Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.767793 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerName="registry-server" Nov 27 18:19:46 crc kubenswrapper[4792]: E1127 18:19:46.767803 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerName="extract-content" Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.767808 4792 
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.767808 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerName="extract-content"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.768067 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b68d1963-13cb-479e-8946-2a1ffe534c42" containerName="registry-server"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.769838 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.783983 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hbrpj"]
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.883557 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-utilities\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.884247 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggpsv\" (UniqueName: \"kubernetes.io/projected/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-kube-api-access-ggpsv\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.884453 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-catalog-content\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.987629 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggpsv\" (UniqueName: \"kubernetes.io/projected/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-kube-api-access-ggpsv\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.987753 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-catalog-content\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.987874 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-utilities\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.988307 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-catalog-content\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:46 crc kubenswrapper[4792]: I1127 18:19:46.988406 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-utilities\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:47 crc kubenswrapper[4792]: I1127 18:19:47.015706 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggpsv\" (UniqueName: \"kubernetes.io/projected/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-kube-api-access-ggpsv\") pod \"redhat-operators-hbrpj\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") " pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:47 crc kubenswrapper[4792]: I1127 18:19:47.098697 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:19:47 crc kubenswrapper[4792]: I1127 18:19:47.725866 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hbrpj"]
Nov 27 18:19:48 crc kubenswrapper[4792]: E1127 18:19:48.104736 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf426f7ff_6ab7_4781_bde1_1c1a9f6d1533.slice/crio-156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276.scope\": RecentStats: unable to find data in memory cache]"
Nov 27 18:19:48 crc kubenswrapper[4792]: I1127 18:19:48.370481 4792 generic.go:334] "Generic (PLEG): container finished" podID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerID="156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276" exitCode=0
Nov 27 18:19:48 crc kubenswrapper[4792]: I1127 18:19:48.370592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbrpj" event={"ID":"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533","Type":"ContainerDied","Data":"156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276"}
Nov 27 18:19:48 crc kubenswrapper[4792]: I1127 18:19:48.370804 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbrpj" event={"ID":"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533","Type":"ContainerStarted","Data":"ca16489711e88e2deae3ac8c41b4a13b7d15b9b05fcb547d15c21edea421cab0"}
Nov 27 18:19:50 crc kubenswrapper[4792]: I1127 18:19:50.395835 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbrpj" event={"ID":"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533","Type":"ContainerStarted","Data":"a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42"}
Nov 27 18:19:57 crc kubenswrapper[4792]: I1127 18:19:57.482323 4792 generic.go:334] "Generic (PLEG): container finished" podID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerID="a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42" exitCode=0
Nov 27 18:19:57 crc kubenswrapper[4792]: I1127 18:19:57.482418 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbrpj" event={"ID":"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533","Type":"ContainerDied","Data":"a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42"}
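
Grouped by pod, the PLEG records above show the shape every openshift-marketplace catalog pod in this journal follows: the sandbox starts, two short-lived extract containers each run and exit 0, then the long-lived registry-server starts and serves until the pod is deleted. A small aggregation over the pleg_events() helper sketched earlier makes that shape easy to eyeball across pods; nothing here is kubelet-specific:

    from collections import defaultdict

    # Reuses pleg_events() from the first sketch: yields (pod, type, container_id).
    def lifecycle(events):
        timeline = defaultdict(list)
        for pod, etype, cid in events:
            timeline[pod].append(f"{etype}:{cid[:8]}")
        return timeline

    for pod, steps in lifecycle(pleg_events()).items():
        # A healthy catalog pod reads roughly: Started (sandbox), Started/Died
        # (extract-utilities), Started/Died (extract-content), Started
        # (registry-server), then two Died events when the pod is deleted.
        print(pod)
        print("  " + " -> ".join(steps))
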
event={"ID":"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533","Type":"ContainerStarted","Data":"7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a"} Nov 27 18:19:58 crc kubenswrapper[4792]: I1127 18:19:58.526388 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hbrpj" podStartSLOduration=2.96477509 podStartE2EDuration="12.52636716s" podCreationTimestamp="2025-11-27 18:19:46 +0000 UTC" firstStartedPulling="2025-11-27 18:19:48.375676352 +0000 UTC m=+4210.718502670" lastFinishedPulling="2025-11-27 18:19:57.937268422 +0000 UTC m=+4220.280094740" observedRunningTime="2025-11-27 18:19:58.516084945 +0000 UTC m=+4220.858911263" watchObservedRunningTime="2025-11-27 18:19:58.52636716 +0000 UTC m=+4220.869193478" Nov 27 18:20:07 crc kubenswrapper[4792]: I1127 18:20:07.099880 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hbrpj" Nov 27 18:20:07 crc kubenswrapper[4792]: I1127 18:20:07.100561 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hbrpj" Nov 27 18:20:07 crc kubenswrapper[4792]: I1127 18:20:07.162435 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hbrpj" Nov 27 18:20:07 crc kubenswrapper[4792]: I1127 18:20:07.659570 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hbrpj" Nov 27 18:20:07 crc kubenswrapper[4792]: I1127 18:20:07.717860 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hbrpj"] Nov 27 18:20:09 crc kubenswrapper[4792]: I1127 18:20:09.621273 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hbrpj" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerName="registry-server" containerID="cri-o://7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a" gracePeriod=2 Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.309327 4792 util.go:48] "No ready sandbox for pod can be found. 
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.309327 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.472965 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-catalog-content\") pod \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") "
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.473231 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggpsv\" (UniqueName: \"kubernetes.io/projected/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-kube-api-access-ggpsv\") pod \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") "
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.473471 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-utilities\") pod \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\" (UID: \"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533\") "
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.475248 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-utilities" (OuterVolumeSpecName: "utilities") pod "f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" (UID: "f426f7ff-6ab7-4781-bde1-1c1a9f6d1533"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.487764 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-kube-api-access-ggpsv" (OuterVolumeSpecName: "kube-api-access-ggpsv") pod "f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" (UID: "f426f7ff-6ab7-4781-bde1-1c1a9f6d1533"). InnerVolumeSpecName "kube-api-access-ggpsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.576823 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggpsv\" (UniqueName: \"kubernetes.io/projected/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-kube-api-access-ggpsv\") on node \"crc\" DevicePath \"\""
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.576860 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-utilities\") on node \"crc\" DevicePath \"\""
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.591858 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" (UID: "f426f7ff-6ab7-4781-bde1-1c1a9f6d1533"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.639768 4792 generic.go:334] "Generic (PLEG): container finished" podID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerID="7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a" exitCode=0
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.639826 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbrpj" event={"ID":"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533","Type":"ContainerDied","Data":"7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a"}
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.639859 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hbrpj" event={"ID":"f426f7ff-6ab7-4781-bde1-1c1a9f6d1533","Type":"ContainerDied","Data":"ca16489711e88e2deae3ac8c41b4a13b7d15b9b05fcb547d15c21edea421cab0"}
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.639896 4792 scope.go:117] "RemoveContainer" containerID="7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.640199 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hbrpj"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.672125 4792 scope.go:117] "RemoveContainer" containerID="a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.683743 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.707285 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hbrpj"]
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.709917 4792 scope.go:117] "RemoveContainer" containerID="156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.710355 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hbrpj"]
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.763143 4792 scope.go:117] "RemoveContainer" containerID="7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a"
Nov 27 18:20:10 crc kubenswrapper[4792]: E1127 18:20:10.763796 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a\": container with ID starting with 7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a not found: ID does not exist" containerID="7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.763841 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a"} err="failed to get container status \"7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a\": rpc error: code = NotFound desc = could not find container \"7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a\": container with ID starting with 7538592f932f611b250b7f62c019b1cf07440f807f2fa7c0b81d0e0f8464616a not found: ID does not exist"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.763870 4792 scope.go:117] "RemoveContainer" containerID="a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42"
Nov 27 18:20:10 crc kubenswrapper[4792]: E1127 18:20:10.764235 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42\": container with ID starting with a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42 not found: ID does not exist" containerID="a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.764264 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42"} err="failed to get container status \"a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42\": rpc error: code = NotFound desc = could not find container \"a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42\": container with ID starting with a46f2e3072225c1720fd7a65911851c8c159855653a31f674d7420fbbe367a42 not found: ID does not exist"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.764284 4792 scope.go:117] "RemoveContainer" containerID="156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276"
Nov 27 18:20:10 crc kubenswrapper[4792]: E1127 18:20:10.764673 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276\": container with ID starting with 156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276 not found: ID does not exist" containerID="156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276"
Nov 27 18:20:10 crc kubenswrapper[4792]: I1127 18:20:10.764702 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276"} err="failed to get container status \"156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276\": rpc error: code = NotFound desc = could not find container \"156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276\": container with ID starting with 156831050cae19cebe19d4de321cf898e08f20ab838239934fdc6d4b28870276 not found: ID does not exist"
Nov 27 18:20:12 crc kubenswrapper[4792]: I1127 18:20:12.702152 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" path="/var/lib/kubelet/pods/f426f7ff-6ab7-4781-bde1-1c1a9f6d1533/volumes"
Nov 27 18:20:30 crc kubenswrapper[4792]: I1127 18:20:30.957341 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrdp"]
Nov 27 18:20:30 crc kubenswrapper[4792]: E1127 18:20:30.959718 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerName="extract-utilities"
Nov 27 18:20:30 crc kubenswrapper[4792]: I1127 18:20:30.959836 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerName="extract-utilities"
Nov 27 18:20:30 crc kubenswrapper[4792]: E1127 18:20:30.959899 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerName="registry-server"
Nov 27 18:20:30 crc kubenswrapper[4792]: I1127 18:20:30.960034 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerName="registry-server"
Nov 27 18:20:30 crc kubenswrapper[4792]: E1127 18:20:30.960126 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerName="extract-content"
Nov 27 18:20:30 crc kubenswrapper[4792]: I1127 18:20:30.960208 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerName="extract-content"
Nov 27 18:20:30 crc kubenswrapper[4792]: I1127 18:20:30.960577 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f426f7ff-6ab7-4781-bde1-1c1a9f6d1533" containerName="registry-server"
Nov 27 18:20:30 crc kubenswrapper[4792]: I1127 18:20:30.963072 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:30 crc kubenswrapper[4792]: I1127 18:20:30.972403 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrdp"]
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.039927 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-catalog-content\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.040286 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-utilities\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.040429 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6489\" (UniqueName: \"kubernetes.io/projected/32eadec3-0815-4902-a273-a1211962fb70-kube-api-access-x6489\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.142875 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-catalog-content\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.142991 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-utilities\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.143040 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6489\" (UniqueName: \"kubernetes.io/projected/32eadec3-0815-4902-a273-a1211962fb70-kube-api-access-x6489\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.143802 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-catalog-content\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.143851 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-utilities\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.164388 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6489\" (UniqueName: \"kubernetes.io/projected/32eadec3-0815-4902-a273-a1211962fb70-kube-api-access-x6489\") pod \"redhat-marketplace-jcrdp\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") " pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.291817 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.834833 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrdp"]
Nov 27 18:20:31 crc kubenswrapper[4792]: I1127 18:20:31.875529 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrdp" event={"ID":"32eadec3-0815-4902-a273-a1211962fb70","Type":"ContainerStarted","Data":"e51cba0699ffae021eef51a4c87d2afcb1a30c79ff6f44f8c198f8c0d4587ad4"}
Nov 27 18:20:32 crc kubenswrapper[4792]: I1127 18:20:32.892344 4792 generic.go:334] "Generic (PLEG): container finished" podID="32eadec3-0815-4902-a273-a1211962fb70" containerID="f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d" exitCode=0
Nov 27 18:20:32 crc kubenswrapper[4792]: I1127 18:20:32.892608 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrdp" event={"ID":"32eadec3-0815-4902-a273-a1211962fb70","Type":"ContainerDied","Data":"f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d"}
Nov 27 18:20:34 crc kubenswrapper[4792]: I1127 18:20:34.918178 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrdp" event={"ID":"32eadec3-0815-4902-a273-a1211962fb70","Type":"ContainerStarted","Data":"f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef"}
Nov 27 18:20:35 crc kubenswrapper[4792]: I1127 18:20:35.930996 4792 generic.go:334] "Generic (PLEG): container finished" podID="32eadec3-0815-4902-a273-a1211962fb70" containerID="f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef" exitCode=0
Nov 27 18:20:35 crc kubenswrapper[4792]: I1127 18:20:35.931073 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrdp" event={"ID":"32eadec3-0815-4902-a273-a1211962fb70","Type":"ContainerDied","Data":"f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef"}
Nov 27 18:20:35 crc kubenswrapper[4792]: I1127 18:20:35.934610 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 27 18:20:37 crc kubenswrapper[4792]: I1127 18:20:37.964103 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrdp" event={"ID":"32eadec3-0815-4902-a273-a1211962fb70","Type":"ContainerStarted","Data":"911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067"}
Nov 27 18:20:38 crc kubenswrapper[4792]: I1127 18:20:38.006825 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jcrdp" podStartSLOduration=4.17570581 podStartE2EDuration="8.006799784s" podCreationTimestamp="2025-11-27 18:20:30 +0000 UTC" firstStartedPulling="2025-11-27 18:20:32.897910646 +0000 UTC m=+4255.240736964" lastFinishedPulling="2025-11-27 18:20:36.72900462 +0000 UTC m=+4259.071830938" observedRunningTime="2025-11-27 18:20:37.983857643 +0000 UTC m=+4260.326683961" watchObservedRunningTime="2025-11-27 18:20:38.006799784 +0000 UTC m=+4260.349626102"
Nov 27 18:20:38 crc kubenswrapper[4792]: I1127 18:20:38.290107 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:20:38 crc kubenswrapper[4792]: I1127 18:20:38.290410 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:20:41 crc kubenswrapper[4792]: I1127 18:20:41.292816 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:41 crc kubenswrapper[4792]: I1127 18:20:41.293259 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:41 crc kubenswrapper[4792]: I1127 18:20:41.345216 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:42 crc kubenswrapper[4792]: I1127 18:20:42.081395 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:42 crc kubenswrapper[4792]: I1127 18:20:42.142857 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrdp"]
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.037678 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jcrdp" podUID="32eadec3-0815-4902-a273-a1211962fb70" containerName="registry-server" containerID="cri-o://911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067" gracePeriod=2
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.558687 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcrdp"
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.722287 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-catalog-content\") pod \"32eadec3-0815-4902-a273-a1211962fb70\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") "
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.722537 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6489\" (UniqueName: \"kubernetes.io/projected/32eadec3-0815-4902-a273-a1211962fb70-kube-api-access-x6489\") pod \"32eadec3-0815-4902-a273-a1211962fb70\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") "
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.722610 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-utilities\") pod \"32eadec3-0815-4902-a273-a1211962fb70\" (UID: \"32eadec3-0815-4902-a273-a1211962fb70\") "
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.724088 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-utilities" (OuterVolumeSpecName: "utilities") pod "32eadec3-0815-4902-a273-a1211962fb70" (UID: "32eadec3-0815-4902-a273-a1211962fb70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.758211 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32eadec3-0815-4902-a273-a1211962fb70" (UID: "32eadec3-0815-4902-a273-a1211962fb70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.761713 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32eadec3-0815-4902-a273-a1211962fb70-kube-api-access-x6489" (OuterVolumeSpecName: "kube-api-access-x6489") pod "32eadec3-0815-4902-a273-a1211962fb70" (UID: "32eadec3-0815-4902-a273-a1211962fb70"). InnerVolumeSpecName "kube-api-access-x6489". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.826041 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-utilities\") on node \"crc\" DevicePath \"\""
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.826082 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32eadec3-0815-4902-a273-a1211962fb70-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 27 18:20:44 crc kubenswrapper[4792]: I1127 18:20:44.826096 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6489\" (UniqueName: \"kubernetes.io/projected/32eadec3-0815-4902-a273-a1211962fb70-kube-api-access-x6489\") on node \"crc\" DevicePath \"\""
Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.049356 4792 generic.go:334] "Generic (PLEG): container finished" podID="32eadec3-0815-4902-a273-a1211962fb70" containerID="911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067" exitCode=0
Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.049409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrdp" event={"ID":"32eadec3-0815-4902-a273-a1211962fb70","Type":"ContainerDied","Data":"911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067"}
Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.049444 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcrdp" event={"ID":"32eadec3-0815-4902-a273-a1211962fb70","Type":"ContainerDied","Data":"e51cba0699ffae021eef51a4c87d2afcb1a30c79ff6f44f8c198f8c0d4587ad4"}
Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.049462 4792 scope.go:117] "RemoveContainer" containerID="911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067"
Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.049592 4792 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcrdp" Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.090144 4792 scope.go:117] "RemoveContainer" containerID="f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef" Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.100007 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrdp"] Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.112965 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcrdp"] Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.122259 4792 scope.go:117] "RemoveContainer" containerID="f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d" Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.171705 4792 scope.go:117] "RemoveContainer" containerID="911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067" Nov 27 18:20:45 crc kubenswrapper[4792]: E1127 18:20:45.172260 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067\": container with ID starting with 911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067 not found: ID does not exist" containerID="911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067" Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.172295 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067"} err="failed to get container status \"911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067\": rpc error: code = NotFound desc = could not find container \"911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067\": container with ID starting with 911beed8e605dfd38b9268014ea1fd7f686f83f7c88632dfd14ba03c1bdd1067 not found: ID does not exist" Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.172320 4792 scope.go:117] "RemoveContainer" containerID="f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef" Nov 27 18:20:45 crc kubenswrapper[4792]: E1127 18:20:45.173014 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef\": container with ID starting with f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef not found: ID does not exist" containerID="f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef" Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.173069 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef"} err="failed to get container status \"f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef\": rpc error: code = NotFound desc = could not find container \"f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef\": container with ID starting with f6dec7bee5df488c43aabdb96b6f6944f2206a71a4f03b917b612432f4043aef not found: ID does not exist" Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.173103 4792 scope.go:117] "RemoveContainer" containerID="f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d" Nov 27 18:20:45 crc kubenswrapper[4792]: E1127 18:20:45.173602 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d\": container with ID starting with f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d not found: ID does not exist" containerID="f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d" Nov 27 18:20:45 crc kubenswrapper[4792]: I1127 18:20:45.173634 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d"} err="failed to get container status \"f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d\": rpc error: code = NotFound desc = could not find container \"f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d\": container with ID starting with f64d3e65b7ca72aba6f86001d7a36c1f8dc270566dbd397e2bfcb29d921ee40d not found: ID does not exist" Nov 27 18:20:46 crc kubenswrapper[4792]: I1127 18:20:46.713463 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32eadec3-0815-4902-a273-a1211962fb70" path="/var/lib/kubelet/pods/32eadec3-0815-4902-a273-a1211962fb70/volumes" Nov 27 18:20:46 crc kubenswrapper[4792]: E1127 18:20:46.909142 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:47896->38.102.83.214:33271: write tcp 38.102.83.214:47896->38.102.83.214:33271: write: broken pipe Nov 27 18:21:08 crc kubenswrapper[4792]: I1127 18:21:08.290835 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:21:08 crc kubenswrapper[4792]: I1127 18:21:08.291471 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.290515 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.290997 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.291041 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.291926 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.291970 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" gracePeriod=600 Nov 27 18:21:38 crc kubenswrapper[4792]: E1127 18:21:38.431831 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.623889 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" exitCode=0 Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.623941 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807"} Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.623983 4792 scope.go:117] "RemoveContainer" containerID="0374505357be39e7aa0b55631a44de90e826c04f3723252e953501ac8661747f" Nov 27 18:21:38 crc kubenswrapper[4792]: I1127 18:21:38.625010 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:21:38 crc kubenswrapper[4792]: E1127 18:21:38.625488 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:21:51 crc kubenswrapper[4792]: I1127 18:21:51.686582 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:21:51 crc kubenswrapper[4792]: E1127 18:21:51.687227 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:22:03 crc kubenswrapper[4792]: I1127 18:22:03.687489 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:22:03 crc kubenswrapper[4792]: E1127 18:22:03.688373 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:22:08 crc kubenswrapper[4792]: E1127 18:22:08.994125 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:54964->38.102.83.214:33271: write tcp 38.102.83.214:54964->38.102.83.214:33271: write: broken pipe Nov 27 18:22:17 crc kubenswrapper[4792]: I1127 18:22:17.686789 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:22:17 crc kubenswrapper[4792]: E1127 18:22:17.687520 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:22:20 crc kubenswrapper[4792]: E1127 18:22:20.476717 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:58320->38.102.83.214:33271: write tcp 38.102.83.214:58320->38.102.83.214:33271: write: broken pipe Nov 27 18:22:28 crc kubenswrapper[4792]: I1127 18:22:28.695192 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:22:28 crc kubenswrapper[4792]: E1127 18:22:28.696546 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:22:40 crc kubenswrapper[4792]: I1127 18:22:40.687483 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:22:40 crc kubenswrapper[4792]: E1127 18:22:40.688434 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:22:55 crc kubenswrapper[4792]: I1127 18:22:55.686844 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:22:55 crc kubenswrapper[4792]: E1127 18:22:55.687670 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:23:08 crc kubenswrapper[4792]: I1127 18:23:08.694469 4792 scope.go:117] "RemoveContainer" 
containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:23:08 crc kubenswrapper[4792]: E1127 18:23:08.696516 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:23:22 crc kubenswrapper[4792]: I1127 18:23:22.687754 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:23:22 crc kubenswrapper[4792]: E1127 18:23:22.688754 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:23:33 crc kubenswrapper[4792]: I1127 18:23:33.687272 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:23:33 crc kubenswrapper[4792]: E1127 18:23:33.702930 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:23:48 crc kubenswrapper[4792]: I1127 18:23:48.695830 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:23:48 crc kubenswrapper[4792]: E1127 18:23:48.696844 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:24:01 crc kubenswrapper[4792]: I1127 18:24:01.686825 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:24:01 crc kubenswrapper[4792]: E1127 18:24:01.687890 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:24:15 crc kubenswrapper[4792]: I1127 18:24:15.686923 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:24:15 crc kubenswrapper[4792]: E1127 18:24:15.687629 4792 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.055460 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bcbg5"] Nov 27 18:24:20 crc kubenswrapper[4792]: E1127 18:24:20.056403 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32eadec3-0815-4902-a273-a1211962fb70" containerName="extract-content" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.056418 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="32eadec3-0815-4902-a273-a1211962fb70" containerName="extract-content" Nov 27 18:24:20 crc kubenswrapper[4792]: E1127 18:24:20.056429 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32eadec3-0815-4902-a273-a1211962fb70" containerName="registry-server" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.056435 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="32eadec3-0815-4902-a273-a1211962fb70" containerName="registry-server" Nov 27 18:24:20 crc kubenswrapper[4792]: E1127 18:24:20.056450 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32eadec3-0815-4902-a273-a1211962fb70" containerName="extract-utilities" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.056458 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="32eadec3-0815-4902-a273-a1211962fb70" containerName="extract-utilities" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.056721 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="32eadec3-0815-4902-a273-a1211962fb70" containerName="registry-server" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.058422 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.078066 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bcbg5"] Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.110972 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-catalog-content\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.111033 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfthf\" (UniqueName: \"kubernetes.io/projected/389428df-0609-4420-8507-56f9b1418bbc-kube-api-access-jfthf\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.111111 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-utilities\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.213126 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-catalog-content\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.213186 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfthf\" (UniqueName: \"kubernetes.io/projected/389428df-0609-4420-8507-56f9b1418bbc-kube-api-access-jfthf\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.213235 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-utilities\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.213857 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-utilities\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.213968 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-catalog-content\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.232580 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jfthf\" (UniqueName: \"kubernetes.io/projected/389428df-0609-4420-8507-56f9b1418bbc-kube-api-access-jfthf\") pod \"community-operators-bcbg5\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.379801 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:20 crc kubenswrapper[4792]: I1127 18:24:20.973037 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bcbg5"] Nov 27 18:24:21 crc kubenswrapper[4792]: I1127 18:24:21.412251 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcbg5" event={"ID":"389428df-0609-4420-8507-56f9b1418bbc","Type":"ContainerStarted","Data":"56f80cc41f63a90b041cee8716a11f922d20e32bc8df7e0b798498e03ae8b477"} Nov 27 18:24:22 crc kubenswrapper[4792]: I1127 18:24:22.427024 4792 generic.go:334] "Generic (PLEG): container finished" podID="389428df-0609-4420-8507-56f9b1418bbc" containerID="18cab8e987c59fad4cfd8065fd46a91a8e4140cebd5d0cc99609d3c26facd3ff" exitCode=0 Nov 27 18:24:22 crc kubenswrapper[4792]: I1127 18:24:22.427376 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcbg5" event={"ID":"389428df-0609-4420-8507-56f9b1418bbc","Type":"ContainerDied","Data":"18cab8e987c59fad4cfd8065fd46a91a8e4140cebd5d0cc99609d3c26facd3ff"} Nov 27 18:24:24 crc kubenswrapper[4792]: I1127 18:24:24.450124 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcbg5" event={"ID":"389428df-0609-4420-8507-56f9b1418bbc","Type":"ContainerStarted","Data":"a8ec5a70c72371682fce41e9fcbb4c5ffa55ebc4eefc35df7b90db1cf7fa87ac"} Nov 27 18:24:25 crc kubenswrapper[4792]: I1127 18:24:25.481181 4792 generic.go:334] "Generic (PLEG): container finished" podID="389428df-0609-4420-8507-56f9b1418bbc" containerID="a8ec5a70c72371682fce41e9fcbb4c5ffa55ebc4eefc35df7b90db1cf7fa87ac" exitCode=0 Nov 27 18:24:25 crc kubenswrapper[4792]: I1127 18:24:25.481251 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcbg5" event={"ID":"389428df-0609-4420-8507-56f9b1418bbc","Type":"ContainerDied","Data":"a8ec5a70c72371682fce41e9fcbb4c5ffa55ebc4eefc35df7b90db1cf7fa87ac"} Nov 27 18:24:26 crc kubenswrapper[4792]: I1127 18:24:26.495636 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcbg5" event={"ID":"389428df-0609-4420-8507-56f9b1418bbc","Type":"ContainerStarted","Data":"4f16c47ec4be85a9b66c72523126eb1471b70e15c98a81f4630ca9cfc38bd4da"} Nov 27 18:24:26 crc kubenswrapper[4792]: I1127 18:24:26.524374 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bcbg5" podStartSLOduration=2.922001128 podStartE2EDuration="6.524350443s" podCreationTimestamp="2025-11-27 18:24:20 +0000 UTC" firstStartedPulling="2025-11-27 18:24:22.432201496 +0000 UTC m=+4484.775027824" lastFinishedPulling="2025-11-27 18:24:26.034550821 +0000 UTC m=+4488.377377139" observedRunningTime="2025-11-27 18:24:26.514234922 +0000 UTC m=+4488.857061260" watchObservedRunningTime="2025-11-27 18:24:26.524350443 +0000 UTC m=+4488.867176761" Nov 27 18:24:26 crc kubenswrapper[4792]: I1127 18:24:26.687593 4792 scope.go:117] "RemoveContainer" 
containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:24:26 crc kubenswrapper[4792]: E1127 18:24:26.687972 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:24:30 crc kubenswrapper[4792]: I1127 18:24:30.380343 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:30 crc kubenswrapper[4792]: I1127 18:24:30.380992 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:30 crc kubenswrapper[4792]: I1127 18:24:30.433040 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:40 crc kubenswrapper[4792]: I1127 18:24:40.438624 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:40 crc kubenswrapper[4792]: I1127 18:24:40.490359 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bcbg5"] Nov 27 18:24:40 crc kubenswrapper[4792]: I1127 18:24:40.659514 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bcbg5" podUID="389428df-0609-4420-8507-56f9b1418bbc" containerName="registry-server" containerID="cri-o://4f16c47ec4be85a9b66c72523126eb1471b70e15c98a81f4630ca9cfc38bd4da" gracePeriod=2 Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.680019 4792 generic.go:334] "Generic (PLEG): container finished" podID="389428df-0609-4420-8507-56f9b1418bbc" containerID="4f16c47ec4be85a9b66c72523126eb1471b70e15c98a81f4630ca9cfc38bd4da" exitCode=0 Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.680553 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcbg5" event={"ID":"389428df-0609-4420-8507-56f9b1418bbc","Type":"ContainerDied","Data":"4f16c47ec4be85a9b66c72523126eb1471b70e15c98a81f4630ca9cfc38bd4da"} Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.680587 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcbg5" event={"ID":"389428df-0609-4420-8507-56f9b1418bbc","Type":"ContainerDied","Data":"56f80cc41f63a90b041cee8716a11f922d20e32bc8df7e0b798498e03ae8b477"} Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.680603 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56f80cc41f63a90b041cee8716a11f922d20e32bc8df7e0b798498e03ae8b477" Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.686963 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:24:41 crc kubenswrapper[4792]: E1127 18:24:41.687201 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.701070 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.765012 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-catalog-content\") pod \"389428df-0609-4420-8507-56f9b1418bbc\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.765079 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-utilities\") pod \"389428df-0609-4420-8507-56f9b1418bbc\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.765271 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfthf\" (UniqueName: \"kubernetes.io/projected/389428df-0609-4420-8507-56f9b1418bbc-kube-api-access-jfthf\") pod \"389428df-0609-4420-8507-56f9b1418bbc\" (UID: \"389428df-0609-4420-8507-56f9b1418bbc\") " Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.766866 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-utilities" (OuterVolumeSpecName: "utilities") pod "389428df-0609-4420-8507-56f9b1418bbc" (UID: "389428df-0609-4420-8507-56f9b1418bbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.774836 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/389428df-0609-4420-8507-56f9b1418bbc-kube-api-access-jfthf" (OuterVolumeSpecName: "kube-api-access-jfthf") pod "389428df-0609-4420-8507-56f9b1418bbc" (UID: "389428df-0609-4420-8507-56f9b1418bbc"). InnerVolumeSpecName "kube-api-access-jfthf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.807432 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "389428df-0609-4420-8507-56f9b1418bbc" (UID: "389428df-0609-4420-8507-56f9b1418bbc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.868534 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.868566 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/389428df-0609-4420-8507-56f9b1418bbc-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:24:41 crc kubenswrapper[4792]: I1127 18:24:41.868577 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfthf\" (UniqueName: \"kubernetes.io/projected/389428df-0609-4420-8507-56f9b1418bbc-kube-api-access-jfthf\") on node \"crc\" DevicePath \"\"" Nov 27 18:24:42 crc kubenswrapper[4792]: I1127 18:24:42.692668 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcbg5" Nov 27 18:24:42 crc kubenswrapper[4792]: I1127 18:24:42.748109 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bcbg5"] Nov 27 18:24:42 crc kubenswrapper[4792]: I1127 18:24:42.760716 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bcbg5"] Nov 27 18:24:44 crc kubenswrapper[4792]: I1127 18:24:44.699413 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="389428df-0609-4420-8507-56f9b1418bbc" path="/var/lib/kubelet/pods/389428df-0609-4420-8507-56f9b1418bbc/volumes" Nov 27 18:24:53 crc kubenswrapper[4792]: I1127 18:24:53.687339 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:24:53 crc kubenswrapper[4792]: E1127 18:24:53.688275 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:25:08 crc kubenswrapper[4792]: I1127 18:25:08.694500 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:25:08 crc kubenswrapper[4792]: E1127 18:25:08.695532 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:25:21 crc kubenswrapper[4792]: I1127 18:25:21.688595 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:25:21 crc kubenswrapper[4792]: E1127 18:25:21.689782 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:25:33 crc kubenswrapper[4792]: I1127 18:25:33.687179 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:25:33 crc kubenswrapper[4792]: E1127 18:25:33.688258 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:25:47 crc kubenswrapper[4792]: I1127 18:25:47.687731 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:25:47 crc kubenswrapper[4792]: E1127 18:25:47.689153 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:25:59 crc kubenswrapper[4792]: I1127 18:25:59.687587 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:25:59 crc kubenswrapper[4792]: E1127 18:25:59.688713 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.555917 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 27 18:26:06 crc kubenswrapper[4792]: E1127 18:26:06.557120 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389428df-0609-4420-8507-56f9b1418bbc" containerName="extract-utilities" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.557139 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="389428df-0609-4420-8507-56f9b1418bbc" containerName="extract-utilities" Nov 27 18:26:06 crc kubenswrapper[4792]: E1127 18:26:06.557154 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389428df-0609-4420-8507-56f9b1418bbc" containerName="registry-server" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.557161 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="389428df-0609-4420-8507-56f9b1418bbc" containerName="registry-server" Nov 27 18:26:06 crc kubenswrapper[4792]: E1127 18:26:06.557178 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389428df-0609-4420-8507-56f9b1418bbc" containerName="extract-content" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.557358 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="389428df-0609-4420-8507-56f9b1418bbc" 
containerName="extract-content" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.557698 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="389428df-0609-4420-8507-56f9b1418bbc" containerName="registry-server" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.558791 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.563045 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.563189 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.563374 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wrltw" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.563490 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.568457 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.618332 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-config-data\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.618420 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.618563 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722224 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm5z6\" (UniqueName: \"kubernetes.io/projected/57348a1d-d6f9-4844-894d-b837afec3bdc-kube-api-access-hm5z6\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722329 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722440 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ca-certs\") pod \"tempest-tests-tempest\" 
(UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722467 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722491 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722528 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722548 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722589 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.722694 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-config-data\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.724719 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-config-data\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.725060 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.824890 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm5z6\" (UniqueName: \"kubernetes.io/projected/57348a1d-d6f9-4844-894d-b837afec3bdc-kube-api-access-hm5z6\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " 
pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.825073 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.825110 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.825139 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.825188 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.825376 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.825636 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.825687 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.832346 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.875862 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 
18:26:06.876106 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.876275 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.879382 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm5z6\" (UniqueName: \"kubernetes.io/projected/57348a1d-d6f9-4844-894d-b837afec3bdc-kube-api-access-hm5z6\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:06 crc kubenswrapper[4792]: I1127 18:26:06.911438 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " pod="openstack/tempest-tests-tempest" Nov 27 18:26:07 crc kubenswrapper[4792]: I1127 18:26:07.186289 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 27 18:26:07 crc kubenswrapper[4792]: I1127 18:26:07.709367 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 27 18:26:07 crc kubenswrapper[4792]: I1127 18:26:07.712781 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 18:26:08 crc kubenswrapper[4792]: I1127 18:26:08.658167 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"57348a1d-d6f9-4844-894d-b837afec3bdc","Type":"ContainerStarted","Data":"6ca97e342409cd3104a14f9e8f5962beda401a70d39da3dc4464786ed09e0177"} Nov 27 18:26:12 crc kubenswrapper[4792]: I1127 18:26:12.687719 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:26:12 crc kubenswrapper[4792]: E1127 18:26:12.688679 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.242579 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mtx8s"] Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.246235 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.258792 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtx8s"] Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.280945 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-catalog-content\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.281536 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkpch\" (UniqueName: \"kubernetes.io/projected/91ff38a1-8a34-4c58-8eab-1de80e37e531-kube-api-access-qkpch\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.281591 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-utilities\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.384947 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkpch\" (UniqueName: \"kubernetes.io/projected/91ff38a1-8a34-4c58-8eab-1de80e37e531-kube-api-access-qkpch\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.385058 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-utilities\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.385163 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-catalog-content\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.385659 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-utilities\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.385693 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-catalog-content\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.410986 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qkpch\" (UniqueName: \"kubernetes.io/projected/91ff38a1-8a34-4c58-8eab-1de80e37e531-kube-api-access-qkpch\") pod \"certified-operators-mtx8s\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") " pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:24 crc kubenswrapper[4792]: I1127 18:26:24.592500 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:26:27 crc kubenswrapper[4792]: I1127 18:26:27.687136 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:26:27 crc kubenswrapper[4792]: E1127 18:26:27.688004 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:26:39 crc kubenswrapper[4792]: I1127 18:26:39.687047 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807" Nov 27 18:26:52 crc kubenswrapper[4792]: E1127 18:26:52.426242 4792 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 27 18:26:52 crc kubenswrapper[4792]: E1127 18:26:52.428723 4792 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:
kube-api-access-hm5z6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(57348a1d-d6f9-4844-894d-b837afec3bdc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 27 18:26:52 crc kubenswrapper[4792]: E1127 18:26:52.430284 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="57348a1d-d6f9-4844-894d-b837afec3bdc" Nov 27 18:26:52 crc kubenswrapper[4792]: I1127 18:26:52.979403 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtx8s"] Nov 27 18:26:52 crc kubenswrapper[4792]: W1127 18:26:52.981960 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91ff38a1_8a34_4c58_8eab_1de80e37e531.slice/crio-1650e3b2354070daffdda497e682c6b053a47a9e03ca93e6b331c6f4d5c02783 WatchSource:0}: Error finding container 1650e3b2354070daffdda497e682c6b053a47a9e03ca93e6b331c6f4d5c02783: Status 404 returned error can't find the container with id 1650e3b2354070daffdda497e682c6b053a47a9e03ca93e6b331c6f4d5c02783 Nov 27 18:26:53 crc kubenswrapper[4792]: I1127 18:26:53.180409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx8s" event={"ID":"91ff38a1-8a34-4c58-8eab-1de80e37e531","Type":"ContainerStarted","Data":"1650e3b2354070daffdda497e682c6b053a47a9e03ca93e6b331c6f4d5c02783"} Nov 27 18:26:53 crc kubenswrapper[4792]: I1127 18:26:53.185232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"788d9f3308639f1eb59ae24cebd2145498b437261ae5691516ec56d907d9778f"} Nov 27 18:26:53 crc kubenswrapper[4792]: E1127 18:26:53.187378 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="57348a1d-d6f9-4844-894d-b837afec3bdc" Nov 27 18:26:54 
crc kubenswrapper[4792]: I1127 18:26:54.196356 4792 generic.go:334] "Generic (PLEG): container finished" podID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerID="545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e" exitCode=0 Nov 27 18:26:54 crc kubenswrapper[4792]: I1127 18:26:54.197007 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx8s" event={"ID":"91ff38a1-8a34-4c58-8eab-1de80e37e531","Type":"ContainerDied","Data":"545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e"} Nov 27 18:26:56 crc kubenswrapper[4792]: I1127 18:26:56.219561 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx8s" event={"ID":"91ff38a1-8a34-4c58-8eab-1de80e37e531","Type":"ContainerStarted","Data":"6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d"} Nov 27 18:27:01 crc kubenswrapper[4792]: I1127 18:27:01.273859 4792 generic.go:334] "Generic (PLEG): container finished" podID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerID="6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d" exitCode=0 Nov 27 18:27:01 crc kubenswrapper[4792]: I1127 18:27:01.273906 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx8s" event={"ID":"91ff38a1-8a34-4c58-8eab-1de80e37e531","Type":"ContainerDied","Data":"6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d"} Nov 27 18:27:03 crc kubenswrapper[4792]: I1127 18:27:03.304803 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx8s" event={"ID":"91ff38a1-8a34-4c58-8eab-1de80e37e531","Type":"ContainerStarted","Data":"6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600"} Nov 27 18:27:03 crc kubenswrapper[4792]: I1127 18:27:03.346317 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mtx8s" podStartSLOduration=31.684303435 podStartE2EDuration="39.34628887s" podCreationTimestamp="2025-11-27 18:26:24 +0000 UTC" firstStartedPulling="2025-11-27 18:26:54.202940989 +0000 UTC m=+4636.545767307" lastFinishedPulling="2025-11-27 18:27:01.864926424 +0000 UTC m=+4644.207752742" observedRunningTime="2025-11-27 18:27:03.328846616 +0000 UTC m=+4645.671672944" watchObservedRunningTime="2025-11-27 18:27:03.34628887 +0000 UTC m=+4645.689115188" Nov 27 18:27:04 crc kubenswrapper[4792]: I1127 18:27:04.593667 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:27:04 crc kubenswrapper[4792]: I1127 18:27:04.595909 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:27:04 crc kubenswrapper[4792]: I1127 18:27:04.648567 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mtx8s" Nov 27 18:27:08 crc kubenswrapper[4792]: I1127 18:27:08.842544 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 27 18:27:11 crc kubenswrapper[4792]: I1127 18:27:11.403243 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"57348a1d-d6f9-4844-894d-b837afec3bdc","Type":"ContainerStarted","Data":"83efb2f6992c05c48118ee9c183ecc05db85a14aca0770e34c35636faf66b39d"} Nov 27 18:27:11 crc kubenswrapper[4792]: I1127 18:27:11.423564 4792 
Nov 27 18:27:11 crc kubenswrapper[4792]: I1127 18:27:11.423564 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.295829499 podStartE2EDuration="1m6.423549334s" podCreationTimestamp="2025-11-27 18:26:05 +0000 UTC" firstStartedPulling="2025-11-27 18:26:07.71254505 +0000 UTC m=+4590.055371368" lastFinishedPulling="2025-11-27 18:27:08.840264885 +0000 UTC m=+4651.183091203" observedRunningTime="2025-11-27 18:27:11.419546405 +0000 UTC m=+4653.762372723" watchObservedRunningTime="2025-11-27 18:27:11.423549334 +0000 UTC m=+4653.766375642"
Nov 27 18:27:14 crc kubenswrapper[4792]: I1127 18:27:14.647101 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mtx8s"
Nov 27 18:27:14 crc kubenswrapper[4792]: I1127 18:27:14.709541 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtx8s"]
Nov 27 18:27:15 crc kubenswrapper[4792]: I1127 18:27:15.453950 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mtx8s" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerName="registry-server" containerID="cri-o://6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600" gracePeriod=2
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.065970 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtx8s"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.220766 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-utilities\") pod \"91ff38a1-8a34-4c58-8eab-1de80e37e531\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") "
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.220955 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkpch\" (UniqueName: \"kubernetes.io/projected/91ff38a1-8a34-4c58-8eab-1de80e37e531-kube-api-access-qkpch\") pod \"91ff38a1-8a34-4c58-8eab-1de80e37e531\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") "
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.221305 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-catalog-content\") pod \"91ff38a1-8a34-4c58-8eab-1de80e37e531\" (UID: \"91ff38a1-8a34-4c58-8eab-1de80e37e531\") "
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.221447 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-utilities" (OuterVolumeSpecName: "utilities") pod "91ff38a1-8a34-4c58-8eab-1de80e37e531" (UID: "91ff38a1-8a34-4c58-8eab-1de80e37e531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.222143 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-utilities\") on node \"crc\" DevicePath \"\""
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.228324 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ff38a1-8a34-4c58-8eab-1de80e37e531-kube-api-access-qkpch" (OuterVolumeSpecName: "kube-api-access-qkpch") pod "91ff38a1-8a34-4c58-8eab-1de80e37e531" (UID: "91ff38a1-8a34-4c58-8eab-1de80e37e531"). InnerVolumeSpecName "kube-api-access-qkpch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.266402 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91ff38a1-8a34-4c58-8eab-1de80e37e531" (UID: "91ff38a1-8a34-4c58-8eab-1de80e37e531"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.323787 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ff38a1-8a34-4c58-8eab-1de80e37e531-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.323822 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkpch\" (UniqueName: \"kubernetes.io/projected/91ff38a1-8a34-4c58-8eab-1de80e37e531-kube-api-access-qkpch\") on node \"crc\" DevicePath \"\""
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.470104 4792 generic.go:334] "Generic (PLEG): container finished" podID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerID="6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600" exitCode=0
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.470160 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx8s" event={"ID":"91ff38a1-8a34-4c58-8eab-1de80e37e531","Type":"ContainerDied","Data":"6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600"}
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.470193 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtx8s" event={"ID":"91ff38a1-8a34-4c58-8eab-1de80e37e531","Type":"ContainerDied","Data":"1650e3b2354070daffdda497e682c6b053a47a9e03ca93e6b331c6f4d5c02783"}
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.470214 4792 scope.go:117] "RemoveContainer" containerID="6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.470248 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtx8s"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.516141 4792 scope.go:117] "RemoveContainer" containerID="6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.527495 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtx8s"]
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.543281 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mtx8s"]
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.549480 4792 scope.go:117] "RemoveContainer" containerID="545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.599208 4792 scope.go:117] "RemoveContainer" containerID="6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600"
Nov 27 18:27:16 crc kubenswrapper[4792]: E1127 18:27:16.599661 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600\": container with ID starting with 6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600 not found: ID does not exist" containerID="6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.599699 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600"} err="failed to get container status \"6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600\": rpc error: code = NotFound desc = could not find container \"6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600\": container with ID starting with 6da75a435dd4ec0e93149592e771a44a0473c49bc045bf9f60cd4a5224453600 not found: ID does not exist"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.599720 4792 scope.go:117] "RemoveContainer" containerID="6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d"
Nov 27 18:27:16 crc kubenswrapper[4792]: E1127 18:27:16.599946 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d\": container with ID starting with 6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d not found: ID does not exist" containerID="6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.599970 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d"} err="failed to get container status \"6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d\": rpc error: code = NotFound desc = could not find container \"6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d\": container with ID starting with 6ef7863ceaa92c2b27f3146840dc45529d112caee0ed90b9ded39c61c54eb92d not found: ID does not exist"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.599982 4792 scope.go:117] "RemoveContainer" containerID="545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e"
Nov 27 18:27:16 crc kubenswrapper[4792]: E1127 18:27:16.600215 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e\": container with ID starting with 545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e not found: ID does not exist" containerID="545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.600237 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e"} err="failed to get container status \"545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e\": rpc error: code = NotFound desc = could not find container \"545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e\": container with ID starting with 545e19fb4b6206deedc3a6f67b117e8fdc9b2930ff3a8925691acd6609672f9e not found: ID does not exist"
Nov 27 18:27:16 crc kubenswrapper[4792]: I1127 18:27:16.703008 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" path="/var/lib/kubelet/pods/91ff38a1-8a34-4c58-8eab-1de80e37e531/volumes"
Nov 27 18:29:08 crc kubenswrapper[4792]: I1127 18:29:08.303198 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:29:08 crc kubenswrapper[4792]: I1127 18:29:08.307978 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:29:38 crc kubenswrapper[4792]: I1127 18:29:38.290362 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:29:38 crc kubenswrapper[4792]: I1127 18:29:38.290891 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.649232 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"]
Nov 27 18:30:00 crc kubenswrapper[4792]: E1127 18:30:00.657667 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerName="extract-utilities"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.659529 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerName="extract-utilities"
Nov 27 18:30:00 crc kubenswrapper[4792]: E1127 18:30:00.659993 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerName="registry-server"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.660009 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerName="registry-server"
Nov 27 18:30:00 crc kubenswrapper[4792]: E1127 18:30:00.660041 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerName="extract-content"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.660047 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerName="extract-content"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.660929 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ff38a1-8a34-4c58-8eab-1de80e37e531" containerName="registry-server"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.667147 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.681259 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.681285 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.765724 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"]
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.854668 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n4s7\" (UniqueName: \"kubernetes.io/projected/d4c29d3b-032d-4508-bb12-cbb51d835a23-kube-api-access-2n4s7\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.854898 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4c29d3b-032d-4508-bb12-cbb51d835a23-config-volume\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.854989 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4c29d3b-032d-4508-bb12-cbb51d835a23-secret-volume\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.957115 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4c29d3b-032d-4508-bb12-cbb51d835a23-config-volume\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.957174 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4c29d3b-032d-4508-bb12-cbb51d835a23-secret-volume\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.957329 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n4s7\" (UniqueName: \"kubernetes.io/projected/d4c29d3b-032d-4508-bb12-cbb51d835a23-kube-api-access-2n4s7\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.966961 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4c29d3b-032d-4508-bb12-cbb51d835a23-config-volume\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.982675 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4c29d3b-032d-4508-bb12-cbb51d835a23-secret-volume\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:00 crc kubenswrapper[4792]: I1127 18:30:00.988568 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n4s7\" (UniqueName: \"kubernetes.io/projected/d4c29d3b-032d-4508-bb12-cbb51d835a23-kube-api-access-2n4s7\") pod \"collect-profiles-29404470-2frmc\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:01 crc kubenswrapper[4792]: I1127 18:30:01.008894 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:02 crc kubenswrapper[4792]: I1127 18:30:02.603104 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"]
Nov 27 18:30:02 crc kubenswrapper[4792]: W1127 18:30:02.623266 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4c29d3b_032d_4508_bb12_cbb51d835a23.slice/crio-cdf77b02cbe761536a2fb5003fa1e5d66c6f8ab441b11575532de0d9e93f640a WatchSource:0}: Error finding container cdf77b02cbe761536a2fb5003fa1e5d66c6f8ab441b11575532de0d9e93f640a: Status 404 returned error can't find the container with id cdf77b02cbe761536a2fb5003fa1e5d66c6f8ab441b11575532de0d9e93f640a
Nov 27 18:30:03 crc kubenswrapper[4792]: I1127 18:30:03.382730 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc" event={"ID":"d4c29d3b-032d-4508-bb12-cbb51d835a23","Type":"ContainerStarted","Data":"ed484677136d3e2fada79c8af369fcbbdf6831849e4d70cd085dc3121a83c1e0"}
Nov 27 18:30:03 crc kubenswrapper[4792]: I1127 18:30:03.383059 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc" event={"ID":"d4c29d3b-032d-4508-bb12-cbb51d835a23","Type":"ContainerStarted","Data":"cdf77b02cbe761536a2fb5003fa1e5d66c6f8ab441b11575532de0d9e93f640a"}
Nov 27 18:30:03 crc kubenswrapper[4792]: I1127 18:30:03.410934 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc" podStartSLOduration=3.408921619 podStartE2EDuration="3.408921619s" podCreationTimestamp="2025-11-27 18:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 18:30:03.398145452 +0000 UTC m=+4825.740971780" watchObservedRunningTime="2025-11-27 18:30:03.408921619 +0000 UTC m=+4825.751747937"
Nov 27 18:30:04 crc kubenswrapper[4792]: I1127 18:30:04.395776 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc" event={"ID":"d4c29d3b-032d-4508-bb12-cbb51d835a23","Type":"ContainerDied","Data":"ed484677136d3e2fada79c8af369fcbbdf6831849e4d70cd085dc3121a83c1e0"}
Nov 27 18:30:04 crc kubenswrapper[4792]: I1127 18:30:04.396354 4792 generic.go:334] "Generic (PLEG): container finished" podID="d4c29d3b-032d-4508-bb12-cbb51d835a23" containerID="ed484677136d3e2fada79c8af369fcbbdf6831849e4d70cd085dc3121a83c1e0" exitCode=0
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.345947 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.423611 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc" event={"ID":"d4c29d3b-032d-4508-bb12-cbb51d835a23","Type":"ContainerDied","Data":"cdf77b02cbe761536a2fb5003fa1e5d66c6f8ab441b11575532de0d9e93f640a"}
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.424021 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdf77b02cbe761536a2fb5003fa1e5d66c6f8ab441b11575532de0d9e93f640a"
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.424148 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404470-2frmc"
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.520034 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n4s7\" (UniqueName: \"kubernetes.io/projected/d4c29d3b-032d-4508-bb12-cbb51d835a23-kube-api-access-2n4s7\") pod \"d4c29d3b-032d-4508-bb12-cbb51d835a23\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") "
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.520102 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4c29d3b-032d-4508-bb12-cbb51d835a23-secret-volume\") pod \"d4c29d3b-032d-4508-bb12-cbb51d835a23\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") "
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.520155 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4c29d3b-032d-4508-bb12-cbb51d835a23-config-volume\") pod \"d4c29d3b-032d-4508-bb12-cbb51d835a23\" (UID: \"d4c29d3b-032d-4508-bb12-cbb51d835a23\") "
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.524938 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c29d3b-032d-4508-bb12-cbb51d835a23-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4c29d3b-032d-4508-bb12-cbb51d835a23" (UID: "d4c29d3b-032d-4508-bb12-cbb51d835a23"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.534981 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c29d3b-032d-4508-bb12-cbb51d835a23-kube-api-access-2n4s7" (OuterVolumeSpecName: "kube-api-access-2n4s7") pod "d4c29d3b-032d-4508-bb12-cbb51d835a23" (UID: "d4c29d3b-032d-4508-bb12-cbb51d835a23"). InnerVolumeSpecName "kube-api-access-2n4s7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.535115 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c29d3b-032d-4508-bb12-cbb51d835a23-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4c29d3b-032d-4508-bb12-cbb51d835a23" (UID: "d4c29d3b-032d-4508-bb12-cbb51d835a23"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.623693 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n4s7\" (UniqueName: \"kubernetes.io/projected/d4c29d3b-032d-4508-bb12-cbb51d835a23-kube-api-access-2n4s7\") on node \"crc\" DevicePath \"\""
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.623734 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4c29d3b-032d-4508-bb12-cbb51d835a23-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 27 18:30:06 crc kubenswrapper[4792]: I1127 18:30:06.623748 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4c29d3b-032d-4508-bb12-cbb51d835a23-config-volume\") on node \"crc\" DevicePath \"\""
Nov 27 18:30:07 crc kubenswrapper[4792]: I1127 18:30:07.474370 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"]
Nov 27 18:30:07 crc kubenswrapper[4792]: I1127 18:30:07.489155 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404425-mv5bv"]
Nov 27 18:30:08 crc kubenswrapper[4792]: I1127 18:30:08.291140 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:30:08 crc kubenswrapper[4792]: I1127 18:30:08.291500 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:30:08 crc kubenswrapper[4792]: I1127 18:30:08.291545 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx"
Nov 27 18:30:08 crc kubenswrapper[4792]: I1127 18:30:08.300089 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"788d9f3308639f1eb59ae24cebd2145498b437261ae5691516ec56d907d9778f"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 27 18:30:08 crc kubenswrapper[4792]: I1127 18:30:08.300203 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://788d9f3308639f1eb59ae24cebd2145498b437261ae5691516ec56d907d9778f" gracePeriod=600
Nov 27 18:30:08 crc kubenswrapper[4792]: I1127 18:30:08.714983 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177d378f-85ae-40d3-9c3f-3bfb6a40790a" path="/var/lib/kubelet/pods/177d378f-85ae-40d3-9c3f-3bfb6a40790a/volumes"
Nov 27 18:30:09 crc kubenswrapper[4792]: I1127 18:30:09.463001 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="788d9f3308639f1eb59ae24cebd2145498b437261ae5691516ec56d907d9778f" exitCode=0
Nov 27 18:30:09 crc kubenswrapper[4792]: I1127 18:30:09.463101 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"788d9f3308639f1eb59ae24cebd2145498b437261ae5691516ec56d907d9778f"}
Nov 27 18:30:09 crc kubenswrapper[4792]: I1127 18:30:09.463584 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187"}
Nov 27 18:30:09 crc kubenswrapper[4792]: I1127 18:30:09.463611 4792 scope.go:117] "RemoveContainer" containerID="f19731dba0df7503b3eb5c59c2b9f49b106d5783c4d116c906dcf6826f78c807"
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.214734 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7f56s"]
Nov 27 18:30:32 crc kubenswrapper[4792]: E1127 18:30:32.216666 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c29d3b-032d-4508-bb12-cbb51d835a23" containerName="collect-profiles"
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.218879 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c29d3b-032d-4508-bb12-cbb51d835a23" containerName="collect-profiles"
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.224348 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c29d3b-032d-4508-bb12-cbb51d835a23" containerName="collect-profiles"
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.229508 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7f56s"
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.237160 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7f56s"]
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.347236 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-utilities\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s"
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.347474 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdq6\" (UniqueName: \"kubernetes.io/projected/b67038b2-7ac1-4452-afb1-57087ee6d37e-kube-api-access-cqdq6\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s"
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.347500 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-catalog-content\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s"
Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.438252 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cthvw"]
Need to start a new one" pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.450079 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdq6\" (UniqueName: \"kubernetes.io/projected/b67038b2-7ac1-4452-afb1-57087ee6d37e-kube-api-access-cqdq6\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.450120 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-catalog-content\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.450167 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-utilities\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.453348 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cthvw"] Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.457035 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-catalog-content\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.460607 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-utilities\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.513589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdq6\" (UniqueName: \"kubernetes.io/projected/b67038b2-7ac1-4452-afb1-57087ee6d37e-kube-api-access-cqdq6\") pod \"redhat-marketplace-7f56s\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.552775 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-utilities\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.553016 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c66j9\" (UniqueName: \"kubernetes.io/projected/bd5d9601-7b43-4d21-bb98-baf15fde30d2-kube-api-access-c66j9\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.553276 4792 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-catalog-content\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.595406 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.655227 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-utilities\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.655322 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c66j9\" (UniqueName: \"kubernetes.io/projected/bd5d9601-7b43-4d21-bb98-baf15fde30d2-kube-api-access-c66j9\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.655380 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-catalog-content\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.660345 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-catalog-content\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.663469 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-utilities\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.685778 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c66j9\" (UniqueName: \"kubernetes.io/projected/bd5d9601-7b43-4d21-bb98-baf15fde30d2-kube-api-access-c66j9\") pod \"redhat-operators-cthvw\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:32 crc kubenswrapper[4792]: I1127 18:30:32.764565 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:33 crc kubenswrapper[4792]: I1127 18:30:33.940333 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cthvw"] Nov 27 18:30:33 crc kubenswrapper[4792]: I1127 18:30:33.951744 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7f56s"] Nov 27 18:30:34 crc kubenswrapper[4792]: I1127 18:30:34.797679 4792 generic.go:334] "Generic (PLEG): container finished" podID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerID="a97111cdeb2db6684f6894e37d434e86b27daeab454991eb690a99dcfa170d9f" exitCode=0 Nov 27 18:30:34 crc kubenswrapper[4792]: I1127 18:30:34.797770 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7f56s" event={"ID":"b67038b2-7ac1-4452-afb1-57087ee6d37e","Type":"ContainerDied","Data":"a97111cdeb2db6684f6894e37d434e86b27daeab454991eb690a99dcfa170d9f"} Nov 27 18:30:34 crc kubenswrapper[4792]: I1127 18:30:34.798044 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7f56s" event={"ID":"b67038b2-7ac1-4452-afb1-57087ee6d37e","Type":"ContainerStarted","Data":"3839f5d2db84848298e3fdaca4c107c9f698f66751f99a1da8f15be655514235"} Nov 27 18:30:34 crc kubenswrapper[4792]: I1127 18:30:34.799791 4792 generic.go:334] "Generic (PLEG): container finished" podID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerID="798bb866a9946e22171d34502ff39938ea9170a8fa991629efa26efb002c65fa" exitCode=0 Nov 27 18:30:34 crc kubenswrapper[4792]: I1127 18:30:34.799829 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cthvw" event={"ID":"bd5d9601-7b43-4d21-bb98-baf15fde30d2","Type":"ContainerDied","Data":"798bb866a9946e22171d34502ff39938ea9170a8fa991629efa26efb002c65fa"} Nov 27 18:30:34 crc kubenswrapper[4792]: I1127 18:30:34.799853 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cthvw" event={"ID":"bd5d9601-7b43-4d21-bb98-baf15fde30d2","Type":"ContainerStarted","Data":"19422f808a7e4bf1be4b62c1c024490cd7ecadbcd3d95399447a8cf5fcb282b8"} Nov 27 18:30:36 crc kubenswrapper[4792]: I1127 18:30:36.821638 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cthvw" event={"ID":"bd5d9601-7b43-4d21-bb98-baf15fde30d2","Type":"ContainerStarted","Data":"00594e2d03d08387311d916e2a163b0c4868031b3933b14aa733c450f76f6fc3"} Nov 27 18:30:36 crc kubenswrapper[4792]: I1127 18:30:36.824016 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7f56s" event={"ID":"b67038b2-7ac1-4452-afb1-57087ee6d37e","Type":"ContainerStarted","Data":"8b010377fbaebea9aca9febbc5077d1bec2f04ed5be5e4c5232e95bf7c446130"} Nov 27 18:30:39 crc kubenswrapper[4792]: I1127 18:30:39.370724 4792 generic.go:334] "Generic (PLEG): container finished" podID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerID="8b010377fbaebea9aca9febbc5077d1bec2f04ed5be5e4c5232e95bf7c446130" exitCode=0 Nov 27 18:30:39 crc kubenswrapper[4792]: I1127 18:30:39.370815 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7f56s" event={"ID":"b67038b2-7ac1-4452-afb1-57087ee6d37e","Type":"ContainerDied","Data":"8b010377fbaebea9aca9febbc5077d1bec2f04ed5be5e4c5232e95bf7c446130"} Nov 27 18:30:40 crc kubenswrapper[4792]: I1127 18:30:40.396384 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7f56s" event={"ID":"b67038b2-7ac1-4452-afb1-57087ee6d37e","Type":"ContainerStarted","Data":"12850585139b0f24df4c2d8b433ddd5232f777ea0c32415d7c4e26baef9ffdb3"} Nov 27 18:30:40 crc kubenswrapper[4792]: I1127 18:30:40.506235 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7f56s" podStartSLOduration=3.284380286 podStartE2EDuration="8.421765011s" podCreationTimestamp="2025-11-27 18:30:32 +0000 UTC" firstStartedPulling="2025-11-27 18:30:34.799604979 +0000 UTC m=+4857.142431297" lastFinishedPulling="2025-11-27 18:30:39.936989694 +0000 UTC m=+4862.279816022" observedRunningTime="2025-11-27 18:30:40.417091965 +0000 UTC m=+4862.759918293" watchObservedRunningTime="2025-11-27 18:30:40.421765011 +0000 UTC m=+4862.764591329" Nov 27 18:30:42 crc kubenswrapper[4792]: I1127 18:30:42.597059 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:42 crc kubenswrapper[4792]: I1127 18:30:42.597612 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:30:43 crc kubenswrapper[4792]: I1127 18:30:43.670969 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7f56s" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="registry-server" probeResult="failure" output=< Nov 27 18:30:43 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:30:43 crc kubenswrapper[4792]: > Nov 27 18:30:45 crc kubenswrapper[4792]: I1127 18:30:45.451933 4792 generic.go:334] "Generic (PLEG): container finished" podID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerID="00594e2d03d08387311d916e2a163b0c4868031b3933b14aa733c450f76f6fc3" exitCode=0 Nov 27 18:30:45 crc kubenswrapper[4792]: I1127 18:30:45.452034 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cthvw" event={"ID":"bd5d9601-7b43-4d21-bb98-baf15fde30d2","Type":"ContainerDied","Data":"00594e2d03d08387311d916e2a163b0c4868031b3933b14aa733c450f76f6fc3"} Nov 27 18:30:47 crc kubenswrapper[4792]: I1127 18:30:47.485181 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cthvw" event={"ID":"bd5d9601-7b43-4d21-bb98-baf15fde30d2","Type":"ContainerStarted","Data":"9d6a116a1b07c5aaf20418cad38df18504fb4c154484d59df6682c432b42f1b2"} Nov 27 18:30:47 crc kubenswrapper[4792]: I1127 18:30:47.512973 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cthvw" podStartSLOduration=4.043376259 podStartE2EDuration="15.512944851s" podCreationTimestamp="2025-11-27 18:30:32 +0000 UTC" firstStartedPulling="2025-11-27 18:30:34.801609959 +0000 UTC m=+4857.144436277" lastFinishedPulling="2025-11-27 18:30:46.271178551 +0000 UTC m=+4868.614004869" observedRunningTime="2025-11-27 18:30:47.507190878 +0000 UTC m=+4869.850017206" watchObservedRunningTime="2025-11-27 18:30:47.512944851 +0000 UTC m=+4869.855771169" Nov 27 18:30:52 crc kubenswrapper[4792]: I1127 18:30:52.767492 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:52 crc kubenswrapper[4792]: I1127 18:30:52.768050 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:30:53 crc 
kubenswrapper[4792]: I1127 18:30:53.685334 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7f56s" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="registry-server" probeResult="failure" output=< Nov 27 18:30:53 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:30:53 crc kubenswrapper[4792]: > Nov 27 18:30:53 crc kubenswrapper[4792]: I1127 18:30:53.818254 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cthvw" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" probeResult="failure" output=< Nov 27 18:30:53 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:30:53 crc kubenswrapper[4792]: > Nov 27 18:31:02 crc kubenswrapper[4792]: I1127 18:31:02.703819 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:31:02 crc kubenswrapper[4792]: I1127 18:31:02.796382 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:31:02 crc kubenswrapper[4792]: I1127 18:31:02.964263 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7f56s"] Nov 27 18:31:03 crc kubenswrapper[4792]: I1127 18:31:03.832284 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cthvw" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" probeResult="failure" output=< Nov 27 18:31:03 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:31:03 crc kubenswrapper[4792]: > Nov 27 18:31:04 crc kubenswrapper[4792]: I1127 18:31:04.714035 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7f56s" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="registry-server" containerID="cri-o://12850585139b0f24df4c2d8b433ddd5232f777ea0c32415d7c4e26baef9ffdb3" gracePeriod=2 Nov 27 18:31:05 crc kubenswrapper[4792]: I1127 18:31:05.736957 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7f56s" event={"ID":"b67038b2-7ac1-4452-afb1-57087ee6d37e","Type":"ContainerDied","Data":"12850585139b0f24df4c2d8b433ddd5232f777ea0c32415d7c4e26baef9ffdb3"} Nov 27 18:31:05 crc kubenswrapper[4792]: I1127 18:31:05.737095 4792 generic.go:334] "Generic (PLEG): container finished" podID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerID="12850585139b0f24df4c2d8b433ddd5232f777ea0c32415d7c4e26baef9ffdb3" exitCode=0 Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.166148 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.228529 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-utilities\") pod \"b67038b2-7ac1-4452-afb1-57087ee6d37e\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.228641 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqdq6\" (UniqueName: \"kubernetes.io/projected/b67038b2-7ac1-4452-afb1-57087ee6d37e-kube-api-access-cqdq6\") pod \"b67038b2-7ac1-4452-afb1-57087ee6d37e\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.228767 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-catalog-content\") pod \"b67038b2-7ac1-4452-afb1-57087ee6d37e\" (UID: \"b67038b2-7ac1-4452-afb1-57087ee6d37e\") " Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.233435 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-utilities" (OuterVolumeSpecName: "utilities") pod "b67038b2-7ac1-4452-afb1-57087ee6d37e" (UID: "b67038b2-7ac1-4452-afb1-57087ee6d37e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.270323 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67038b2-7ac1-4452-afb1-57087ee6d37e-kube-api-access-cqdq6" (OuterVolumeSpecName: "kube-api-access-cqdq6") pod "b67038b2-7ac1-4452-afb1-57087ee6d37e" (UID: "b67038b2-7ac1-4452-afb1-57087ee6d37e"). InnerVolumeSpecName "kube-api-access-cqdq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.282589 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b67038b2-7ac1-4452-afb1-57087ee6d37e" (UID: "b67038b2-7ac1-4452-afb1-57087ee6d37e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.334397 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.334451 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqdq6\" (UniqueName: \"kubernetes.io/projected/b67038b2-7ac1-4452-afb1-57087ee6d37e-kube-api-access-cqdq6\") on node \"crc\" DevicePath \"\"" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.334469 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b67038b2-7ac1-4452-afb1-57087ee6d37e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.709701 4792 scope.go:117] "RemoveContainer" containerID="4f16c47ec4be85a9b66c72523126eb1471b70e15c98a81f4630ca9cfc38bd4da" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.752637 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7f56s" event={"ID":"b67038b2-7ac1-4452-afb1-57087ee6d37e","Type":"ContainerDied","Data":"3839f5d2db84848298e3fdaca4c107c9f698f66751f99a1da8f15be655514235"} Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.752712 4792 scope.go:117] "RemoveContainer" containerID="12850585139b0f24df4c2d8b433ddd5232f777ea0c32415d7c4e26baef9ffdb3" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.752894 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7f56s" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.784021 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7f56s"] Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.784255 4792 scope.go:117] "RemoveContainer" containerID="6534cb2584a0fe1d498b7388b428986618bd726ab754c1bebe846cc1ad731382" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.795900 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7f56s"] Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.810019 4792 scope.go:117] "RemoveContainer" containerID="8b010377fbaebea9aca9febbc5077d1bec2f04ed5be5e4c5232e95bf7c446130" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.837473 4792 scope.go:117] "RemoveContainer" containerID="18cab8e987c59fad4cfd8065fd46a91a8e4140cebd5d0cc99609d3c26facd3ff" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.889611 4792 scope.go:117] "RemoveContainer" containerID="a97111cdeb2db6684f6894e37d434e86b27daeab454991eb690a99dcfa170d9f" Nov 27 18:31:06 crc kubenswrapper[4792]: I1127 18:31:06.910279 4792 scope.go:117] "RemoveContainer" containerID="a8ec5a70c72371682fce41e9fcbb4c5ffa55ebc4eefc35df7b90db1cf7fa87ac" Nov 27 18:31:08 crc kubenswrapper[4792]: I1127 18:31:08.701347 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" path="/var/lib/kubelet/pods/b67038b2-7ac1-4452-afb1-57087ee6d37e/volumes" Nov 27 18:31:13 crc kubenswrapper[4792]: I1127 18:31:13.843012 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cthvw" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" probeResult="failure" output=< Nov 27 18:31:13 crc kubenswrapper[4792]: 
timeout: failed to connect service ":50051" within 1s Nov 27 18:31:13 crc kubenswrapper[4792]: > Nov 27 18:31:23 crc kubenswrapper[4792]: I1127 18:31:23.828211 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cthvw" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" probeResult="failure" output=< Nov 27 18:31:23 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:31:23 crc kubenswrapper[4792]: > Nov 27 18:31:33 crc kubenswrapper[4792]: I1127 18:31:33.817149 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cthvw" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" probeResult="failure" output=< Nov 27 18:31:33 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:31:33 crc kubenswrapper[4792]: > Nov 27 18:31:42 crc kubenswrapper[4792]: I1127 18:31:42.834374 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:31:42 crc kubenswrapper[4792]: I1127 18:31:42.897936 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:31:43 crc kubenswrapper[4792]: I1127 18:31:43.042303 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cthvw"] Nov 27 18:31:44 crc kubenswrapper[4792]: I1127 18:31:44.159290 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cthvw" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" containerID="cri-o://9d6a116a1b07c5aaf20418cad38df18504fb4c154484d59df6682c432b42f1b2" gracePeriod=2 Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.175098 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cthvw" event={"ID":"bd5d9601-7b43-4d21-bb98-baf15fde30d2","Type":"ContainerDied","Data":"9d6a116a1b07c5aaf20418cad38df18504fb4c154484d59df6682c432b42f1b2"} Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.175742 4792 generic.go:334] "Generic (PLEG): container finished" podID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerID="9d6a116a1b07c5aaf20418cad38df18504fb4c154484d59df6682c432b42f1b2" exitCode=0 Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.551954 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.717814 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c66j9\" (UniqueName: \"kubernetes.io/projected/bd5d9601-7b43-4d21-bb98-baf15fde30d2-kube-api-access-c66j9\") pod \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.718039 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-utilities\") pod \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.718211 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-catalog-content\") pod \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\" (UID: \"bd5d9601-7b43-4d21-bb98-baf15fde30d2\") " Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.723681 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-utilities" (OuterVolumeSpecName: "utilities") pod "bd5d9601-7b43-4d21-bb98-baf15fde30d2" (UID: "bd5d9601-7b43-4d21-bb98-baf15fde30d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.745185 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5d9601-7b43-4d21-bb98-baf15fde30d2-kube-api-access-c66j9" (OuterVolumeSpecName: "kube-api-access-c66j9") pod "bd5d9601-7b43-4d21-bb98-baf15fde30d2" (UID: "bd5d9601-7b43-4d21-bb98-baf15fde30d2"). InnerVolumeSpecName "kube-api-access-c66j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.829221 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c66j9\" (UniqueName: \"kubernetes.io/projected/bd5d9601-7b43-4d21-bb98-baf15fde30d2-kube-api-access-c66j9\") on node \"crc\" DevicePath \"\"" Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.829268 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.919690 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd5d9601-7b43-4d21-bb98-baf15fde30d2" (UID: "bd5d9601-7b43-4d21-bb98-baf15fde30d2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:31:45 crc kubenswrapper[4792]: I1127 18:31:45.933036 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5d9601-7b43-4d21-bb98-baf15fde30d2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:31:46 crc kubenswrapper[4792]: I1127 18:31:46.192431 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cthvw" event={"ID":"bd5d9601-7b43-4d21-bb98-baf15fde30d2","Type":"ContainerDied","Data":"19422f808a7e4bf1be4b62c1c024490cd7ecadbcd3d95399447a8cf5fcb282b8"} Nov 27 18:31:46 crc kubenswrapper[4792]: I1127 18:31:46.192552 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cthvw" Nov 27 18:31:46 crc kubenswrapper[4792]: I1127 18:31:46.194220 4792 scope.go:117] "RemoveContainer" containerID="9d6a116a1b07c5aaf20418cad38df18504fb4c154484d59df6682c432b42f1b2" Nov 27 18:31:46 crc kubenswrapper[4792]: I1127 18:31:46.257510 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cthvw"] Nov 27 18:31:46 crc kubenswrapper[4792]: I1127 18:31:46.274945 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cthvw"] Nov 27 18:31:46 crc kubenswrapper[4792]: I1127 18:31:46.291452 4792 scope.go:117] "RemoveContainer" containerID="00594e2d03d08387311d916e2a163b0c4868031b3933b14aa733c450f76f6fc3" Nov 27 18:31:46 crc kubenswrapper[4792]: I1127 18:31:46.331898 4792 scope.go:117] "RemoveContainer" containerID="798bb866a9946e22171d34502ff39938ea9170a8fa991629efa26efb002c65fa" Nov 27 18:31:46 crc kubenswrapper[4792]: I1127 18:31:46.703347 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" path="/var/lib/kubelet/pods/bd5d9601-7b43-4d21-bb98-baf15fde30d2/volumes" Nov 27 18:32:08 crc kubenswrapper[4792]: I1127 18:32:08.290081 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:32:08 crc kubenswrapper[4792]: I1127 18:32:08.290684 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:32:38 crc kubenswrapper[4792]: I1127 18:32:38.290277 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:32:38 crc kubenswrapper[4792]: I1127 18:32:38.290869 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:33:08 crc kubenswrapper[4792]: I1127 18:33:08.290746 4792 patch_prober.go:28] 
interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:33:08 crc kubenswrapper[4792]: I1127 18:33:08.291335 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:33:08 crc kubenswrapper[4792]: I1127 18:33:08.291392 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 18:33:08 crc kubenswrapper[4792]: I1127 18:33:08.292401 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 18:33:08 crc kubenswrapper[4792]: I1127 18:33:08.292467 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" gracePeriod=600 Nov 27 18:33:08 crc kubenswrapper[4792]: E1127 18:33:08.415607 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:33:09 crc kubenswrapper[4792]: I1127 18:33:09.137820 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" exitCode=0 Nov 27 18:33:09 crc kubenswrapper[4792]: I1127 18:33:09.137868 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187"} Nov 27 18:33:09 crc kubenswrapper[4792]: I1127 18:33:09.138173 4792 scope.go:117] "RemoveContainer" containerID="788d9f3308639f1eb59ae24cebd2145498b437261ae5691516ec56d907d9778f" Nov 27 18:33:09 crc kubenswrapper[4792]: I1127 18:33:09.138992 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:33:09 crc kubenswrapper[4792]: E1127 18:33:09.139606 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:33:19 crc kubenswrapper[4792]: I1127 18:33:19.686904 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:33:19 crc kubenswrapper[4792]: E1127 18:33:19.687688 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:33:32 crc kubenswrapper[4792]: I1127 18:33:32.693170 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:33:32 crc kubenswrapper[4792]: E1127 18:33:32.694019 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:33:46 crc kubenswrapper[4792]: I1127 18:33:46.687523 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:33:46 crc kubenswrapper[4792]: E1127 18:33:46.688550 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:34:00 crc kubenswrapper[4792]: I1127 18:34:00.687957 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:34:00 crc kubenswrapper[4792]: E1127 18:34:00.688734 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:34:12 crc kubenswrapper[4792]: I1127 18:34:12.687459 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:34:12 crc kubenswrapper[4792]: E1127 18:34:12.688778 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:34:27 crc kubenswrapper[4792]: I1127 18:34:27.688157 4792 
scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:34:27 crc kubenswrapper[4792]: E1127 18:34:27.689141 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:34:42 crc kubenswrapper[4792]: I1127 18:34:42.688279 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:34:42 crc kubenswrapper[4792]: E1127 18:34:42.689378 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:34:53 crc kubenswrapper[4792]: I1127 18:34:53.687505 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:34:53 crc kubenswrapper[4792]: E1127 18:34:53.688605 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.681042 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kzg8n"] Nov 27 18:35:06 crc kubenswrapper[4792]: E1127 18:35:06.684467 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="extract-utilities" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.684585 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="extract-utilities" Nov 27 18:35:06 crc kubenswrapper[4792]: E1127 18:35:06.684747 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="extract-utilities" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.684854 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="extract-utilities" Nov 27 18:35:06 crc kubenswrapper[4792]: E1127 18:35:06.684938 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="extract-content" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.684997 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="extract-content" Nov 27 18:35:06 crc kubenswrapper[4792]: E1127 18:35:06.685059 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="extract-content" Nov 27 18:35:06 crc 
kubenswrapper[4792]: I1127 18:35:06.685114 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="extract-content" Nov 27 18:35:06 crc kubenswrapper[4792]: E1127 18:35:06.685175 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="registry-server" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.685758 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="registry-server" Nov 27 18:35:06 crc kubenswrapper[4792]: E1127 18:35:06.685940 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.685949 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.688909 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67038b2-7ac1-4452-afb1-57087ee6d37e" containerName="registry-server" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.689041 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5d9601-7b43-4d21-bb98-baf15fde30d2" containerName="registry-server" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.693964 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.763455 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kzg8n"] Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.877261 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvc7\" (UniqueName: \"kubernetes.io/projected/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-kube-api-access-cdvc7\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.877368 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-utilities\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.877529 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-catalog-content\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.980043 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-utilities\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.980078 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdvc7\" (UniqueName: 
\"kubernetes.io/projected/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-kube-api-access-cdvc7\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.980178 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-catalog-content\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.980505 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-utilities\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:06 crc kubenswrapper[4792]: I1127 18:35:06.980618 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-catalog-content\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:07 crc kubenswrapper[4792]: I1127 18:35:07.011223 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdvc7\" (UniqueName: \"kubernetes.io/projected/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-kube-api-access-cdvc7\") pod \"community-operators-kzg8n\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:07 crc kubenswrapper[4792]: I1127 18:35:07.023590 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:07 crc kubenswrapper[4792]: I1127 18:35:07.705366 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kzg8n"] Nov 27 18:35:08 crc kubenswrapper[4792]: I1127 18:35:08.612421 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzg8n" event={"ID":"0e754c72-6b1a-47d0-8cce-13e0cbd72d26","Type":"ContainerStarted","Data":"066b1c537d71a2ef0f884288de40c8affb84a9608296990052e011d2e0999bf2"} Nov 27 18:35:08 crc kubenswrapper[4792]: I1127 18:35:08.613014 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzg8n" event={"ID":"0e754c72-6b1a-47d0-8cce-13e0cbd72d26","Type":"ContainerStarted","Data":"6526a856a4619ed1f97a1b8d0aeb9565417c5fc08654488f064ee2fbda8b9ff4"} Nov 27 18:35:08 crc kubenswrapper[4792]: I1127 18:35:08.703550 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:35:08 crc kubenswrapper[4792]: E1127 18:35:08.703963 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:35:09 crc kubenswrapper[4792]: I1127 18:35:09.625267 4792 generic.go:334] "Generic (PLEG): container finished" podID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerID="066b1c537d71a2ef0f884288de40c8affb84a9608296990052e011d2e0999bf2" exitCode=0 Nov 27 18:35:09 crc kubenswrapper[4792]: I1127 18:35:09.625340 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzg8n" event={"ID":"0e754c72-6b1a-47d0-8cce-13e0cbd72d26","Type":"ContainerDied","Data":"066b1c537d71a2ef0f884288de40c8affb84a9608296990052e011d2e0999bf2"} Nov 27 18:35:09 crc kubenswrapper[4792]: I1127 18:35:09.631352 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 18:35:11 crc kubenswrapper[4792]: I1127 18:35:11.646258 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzg8n" event={"ID":"0e754c72-6b1a-47d0-8cce-13e0cbd72d26","Type":"ContainerStarted","Data":"8dcf01a2a071d38aa4d9f39b1a83a2d022dd90b1a2bbad9ba018f18d0b8c6020"} Nov 27 18:35:13 crc kubenswrapper[4792]: I1127 18:35:13.670181 4792 generic.go:334] "Generic (PLEG): container finished" podID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerID="8dcf01a2a071d38aa4d9f39b1a83a2d022dd90b1a2bbad9ba018f18d0b8c6020" exitCode=0 Nov 27 18:35:13 crc kubenswrapper[4792]: I1127 18:35:13.670270 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzg8n" event={"ID":"0e754c72-6b1a-47d0-8cce-13e0cbd72d26","Type":"ContainerDied","Data":"8dcf01a2a071d38aa4d9f39b1a83a2d022dd90b1a2bbad9ba018f18d0b8c6020"} Nov 27 18:35:14 crc kubenswrapper[4792]: I1127 18:35:14.682612 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzg8n" event={"ID":"0e754c72-6b1a-47d0-8cce-13e0cbd72d26","Type":"ContainerStarted","Data":"41023a35e3ca9066585bb66afcbd8f69da099ea2ebebc235137123fbfac9e020"} Nov 27 18:35:14 
crc kubenswrapper[4792]: I1127 18:35:14.709393 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kzg8n" podStartSLOduration=4.095806019 podStartE2EDuration="8.70837127s" podCreationTimestamp="2025-11-27 18:35:06 +0000 UTC" firstStartedPulling="2025-11-27 18:35:09.62788322 +0000 UTC m=+5131.970709538" lastFinishedPulling="2025-11-27 18:35:14.240448471 +0000 UTC m=+5136.583274789" observedRunningTime="2025-11-27 18:35:14.702068623 +0000 UTC m=+5137.044894951" watchObservedRunningTime="2025-11-27 18:35:14.70837127 +0000 UTC m=+5137.051197588" Nov 27 18:35:17 crc kubenswrapper[4792]: I1127 18:35:17.023843 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:17 crc kubenswrapper[4792]: I1127 18:35:17.024496 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:18 crc kubenswrapper[4792]: I1127 18:35:18.073831 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kzg8n" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="registry-server" probeResult="failure" output=< Nov 27 18:35:18 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:35:18 crc kubenswrapper[4792]: > Nov 27 18:35:21 crc kubenswrapper[4792]: I1127 18:35:21.686999 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:35:21 crc kubenswrapper[4792]: E1127 18:35:21.687867 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:35:28 crc kubenswrapper[4792]: I1127 18:35:28.072656 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kzg8n" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="registry-server" probeResult="failure" output=< Nov 27 18:35:28 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:35:28 crc kubenswrapper[4792]: > Nov 27 18:35:36 crc kubenswrapper[4792]: I1127 18:35:36.687549 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:35:36 crc kubenswrapper[4792]: E1127 18:35:36.689507 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:35:37 crc kubenswrapper[4792]: I1127 18:35:37.817071 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:37 crc kubenswrapper[4792]: I1127 18:35:37.871548 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:38 crc kubenswrapper[4792]: I1127 18:35:38.056763 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kzg8n"] Nov 27 18:35:39 crc kubenswrapper[4792]: I1127 18:35:39.021481 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kzg8n" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="registry-server" containerID="cri-o://41023a35e3ca9066585bb66afcbd8f69da099ea2ebebc235137123fbfac9e020" gracePeriod=2 Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.033912 4792 generic.go:334] "Generic (PLEG): container finished" podID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerID="41023a35e3ca9066585bb66afcbd8f69da099ea2ebebc235137123fbfac9e020" exitCode=0 Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.033982 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzg8n" event={"ID":"0e754c72-6b1a-47d0-8cce-13e0cbd72d26","Type":"ContainerDied","Data":"41023a35e3ca9066585bb66afcbd8f69da099ea2ebebc235137123fbfac9e020"} Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.035372 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kzg8n" event={"ID":"0e754c72-6b1a-47d0-8cce-13e0cbd72d26","Type":"ContainerDied","Data":"6526a856a4619ed1f97a1b8d0aeb9565417c5fc08654488f064ee2fbda8b9ff4"} Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.035906 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6526a856a4619ed1f97a1b8d0aeb9565417c5fc08654488f064ee2fbda8b9ff4" Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.343864 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.540167 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-utilities\") pod \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.540245 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-catalog-content\") pod \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.540422 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdvc7\" (UniqueName: \"kubernetes.io/projected/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-kube-api-access-cdvc7\") pod \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\" (UID: \"0e754c72-6b1a-47d0-8cce-13e0cbd72d26\") " Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.542202 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-utilities" (OuterVolumeSpecName: "utilities") pod "0e754c72-6b1a-47d0-8cce-13e0cbd72d26" (UID: "0e754c72-6b1a-47d0-8cce-13e0cbd72d26"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.563679 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-kube-api-access-cdvc7" (OuterVolumeSpecName: "kube-api-access-cdvc7") pod "0e754c72-6b1a-47d0-8cce-13e0cbd72d26" (UID: "0e754c72-6b1a-47d0-8cce-13e0cbd72d26"). InnerVolumeSpecName "kube-api-access-cdvc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.610593 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e754c72-6b1a-47d0-8cce-13e0cbd72d26" (UID: "0e754c72-6b1a-47d0-8cce-13e0cbd72d26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.642736 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdvc7\" (UniqueName: \"kubernetes.io/projected/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-kube-api-access-cdvc7\") on node \"crc\" DevicePath \"\"" Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.642767 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:35:40 crc kubenswrapper[4792]: I1127 18:35:40.642777 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e754c72-6b1a-47d0-8cce-13e0cbd72d26-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:35:41 crc kubenswrapper[4792]: I1127 18:35:41.044792 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kzg8n" Nov 27 18:35:41 crc kubenswrapper[4792]: I1127 18:35:41.076290 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kzg8n"] Nov 27 18:35:41 crc kubenswrapper[4792]: I1127 18:35:41.088185 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kzg8n"] Nov 27 18:35:42 crc kubenswrapper[4792]: I1127 18:35:42.702236 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" path="/var/lib/kubelet/pods/0e754c72-6b1a-47d0-8cce-13e0cbd72d26/volumes" Nov 27 18:35:49 crc kubenswrapper[4792]: I1127 18:35:49.687518 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:35:49 crc kubenswrapper[4792]: E1127 18:35:49.688279 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:36:02 crc kubenswrapper[4792]: I1127 18:36:02.689309 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:36:02 crc kubenswrapper[4792]: E1127 18:36:02.693460 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:36:17 crc kubenswrapper[4792]: I1127 18:36:17.687318 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:36:17 crc kubenswrapper[4792]: E1127 18:36:17.688149 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:36:32 crc kubenswrapper[4792]: I1127 18:36:32.687433 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:36:32 crc kubenswrapper[4792]: E1127 18:36:32.688492 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:36:47 crc kubenswrapper[4792]: I1127 18:36:47.686492 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 
18:36:47 crc kubenswrapper[4792]: E1127 18:36:47.687426 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:37:02 crc kubenswrapper[4792]: I1127 18:37:02.686779 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:37:02 crc kubenswrapper[4792]: E1127 18:37:02.687991 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:37:16 crc kubenswrapper[4792]: I1127 18:37:16.687692 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187" Nov 27 18:37:16 crc kubenswrapper[4792]: E1127 18:37:16.688382 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.361951 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnp74"] Nov 27 18:37:30 crc kubenswrapper[4792]: E1127 18:37:30.363078 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="extract-content" Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.363095 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="extract-content" Nov 27 18:37:30 crc kubenswrapper[4792]: E1127 18:37:30.363125 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="registry-server" Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.363133 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="registry-server" Nov 27 18:37:30 crc kubenswrapper[4792]: E1127 18:37:30.363182 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="extract-utilities" Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.363191 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="extract-utilities" Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.363503 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e754c72-6b1a-47d0-8cce-13e0cbd72d26" containerName="registry-server" Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.366705 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.377926 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnp74"]
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.555443 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-utilities\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.555558 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-catalog-content\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.555616 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5657\" (UniqueName: \"kubernetes.io/projected/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-kube-api-access-q5657\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.658355 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5657\" (UniqueName: \"kubernetes.io/projected/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-kube-api-access-q5657\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.658807 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-utilities\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.659042 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-catalog-content\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.659566 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-utilities\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.660152 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-catalog-content\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.679616 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5657\" (UniqueName: \"kubernetes.io/projected/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-kube-api-access-q5657\") pod \"certified-operators-tnp74\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") " pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.689382 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187"
Nov 27 18:37:30 crc kubenswrapper[4792]: E1127 18:37:30.690070 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:37:30 crc kubenswrapper[4792]: I1127 18:37:30.736204 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:31 crc kubenswrapper[4792]: W1127 18:37:31.223514 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode16ecbc0_d2fd_4cfa_a92b_d4ac630a6cfa.slice/crio-8efbb2bba1de762faaa2435a6faf2af2fa39edc27bac70c3f58fcf40380ed713 WatchSource:0}: Error finding container 8efbb2bba1de762faaa2435a6faf2af2fa39edc27bac70c3f58fcf40380ed713: Status 404 returned error can't find the container with id 8efbb2bba1de762faaa2435a6faf2af2fa39edc27bac70c3f58fcf40380ed713
Nov 27 18:37:31 crc kubenswrapper[4792]: I1127 18:37:31.228243 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnp74"]
Nov 27 18:37:32 crc kubenswrapper[4792]: I1127 18:37:32.237345 4792 generic.go:334] "Generic (PLEG): container finished" podID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerID="dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796" exitCode=0
Nov 27 18:37:32 crc kubenswrapper[4792]: I1127 18:37:32.237431 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnp74" event={"ID":"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa","Type":"ContainerDied","Data":"dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796"}
Nov 27 18:37:32 crc kubenswrapper[4792]: I1127 18:37:32.237660 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnp74" event={"ID":"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa","Type":"ContainerStarted","Data":"8efbb2bba1de762faaa2435a6faf2af2fa39edc27bac70c3f58fcf40380ed713"}
Nov 27 18:37:34 crc kubenswrapper[4792]: I1127 18:37:34.262537 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnp74" event={"ID":"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa","Type":"ContainerStarted","Data":"c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30"}
Nov 27 18:37:35 crc kubenswrapper[4792]: I1127 18:37:35.279724 4792 generic.go:334] "Generic (PLEG): container finished" podID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerID="c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30" exitCode=0
Nov 27 18:37:35 crc kubenswrapper[4792]: I1127 18:37:35.279822 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnp74" event={"ID":"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa","Type":"ContainerDied","Data":"c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30"}
Nov 27 18:37:36 crc kubenswrapper[4792]: I1127 18:37:36.294374 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnp74" event={"ID":"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa","Type":"ContainerStarted","Data":"799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a"}
Nov 27 18:37:36 crc kubenswrapper[4792]: I1127 18:37:36.320864 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnp74" podStartSLOduration=2.867493642 podStartE2EDuration="6.320838324s" podCreationTimestamp="2025-11-27 18:37:30 +0000 UTC" firstStartedPulling="2025-11-27 18:37:32.240608282 +0000 UTC m=+5274.583434600" lastFinishedPulling="2025-11-27 18:37:35.693952954 +0000 UTC m=+5278.036779282" observedRunningTime="2025-11-27 18:37:36.312582839 +0000 UTC m=+5278.655409157" watchObservedRunningTime="2025-11-27 18:37:36.320838324 +0000 UTC m=+5278.663664652"
Nov 27 18:37:40 crc kubenswrapper[4792]: I1127 18:37:40.736501 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:40 crc kubenswrapper[4792]: I1127 18:37:40.737053 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:40 crc kubenswrapper[4792]: I1127 18:37:40.784052 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:41 crc kubenswrapper[4792]: I1127 18:37:41.389298 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:41 crc kubenswrapper[4792]: I1127 18:37:41.439358 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnp74"]
Nov 27 18:37:42 crc kubenswrapper[4792]: I1127 18:37:42.687125 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187"
Nov 27 18:37:42 crc kubenswrapper[4792]: E1127 18:37:42.687710 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:37:43 crc kubenswrapper[4792]: I1127 18:37:43.360267 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tnp74" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerName="registry-server" containerID="cri-o://799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a" gracePeriod=2
Nov 27 18:37:43 crc kubenswrapper[4792]: I1127 18:37:43.927830 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.122111 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-utilities\") pod \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") "
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.122941 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-utilities" (OuterVolumeSpecName: "utilities") pod "e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" (UID: "e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.123242 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5657\" (UniqueName: \"kubernetes.io/projected/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-kube-api-access-q5657\") pod \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") "
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.123430 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-catalog-content\") pod \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\" (UID: \"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa\") "
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.124680 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-utilities\") on node \"crc\" DevicePath \"\""
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.129500 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-kube-api-access-q5657" (OuterVolumeSpecName: "kube-api-access-q5657") pod "e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" (UID: "e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa"). InnerVolumeSpecName "kube-api-access-q5657". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.179756 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" (UID: "e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.226888 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.226927 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5657\" (UniqueName: \"kubernetes.io/projected/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa-kube-api-access-q5657\") on node \"crc\" DevicePath \"\""
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.373304 4792 generic.go:334] "Generic (PLEG): container finished" podID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerID="799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a" exitCode=0
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.373354 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnp74" event={"ID":"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa","Type":"ContainerDied","Data":"799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a"}
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.373390 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnp74"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.373412 4792 scope.go:117] "RemoveContainer" containerID="799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.373398 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnp74" event={"ID":"e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa","Type":"ContainerDied","Data":"8efbb2bba1de762faaa2435a6faf2af2fa39edc27bac70c3f58fcf40380ed713"}
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.408045 4792 scope.go:117] "RemoveContainer" containerID="c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.410588 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnp74"]
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.422533 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tnp74"]
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.435481 4792 scope.go:117] "RemoveContainer" containerID="dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.491393 4792 scope.go:117] "RemoveContainer" containerID="799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a"
Nov 27 18:37:44 crc kubenswrapper[4792]: E1127 18:37:44.492786 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a\": container with ID starting with 799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a not found: ID does not exist" containerID="799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.492831 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a"} err="failed to get container status \"799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a\": rpc error: code = NotFound desc = could not find container \"799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a\": container with ID starting with 799dc7ecb7a788f8174cda52abe8dcc1da146068fd1c81972e4ceb15d245002a not found: ID does not exist"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.492859 4792 scope.go:117] "RemoveContainer" containerID="c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30"
Nov 27 18:37:44 crc kubenswrapper[4792]: E1127 18:37:44.493522 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30\": container with ID starting with c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30 not found: ID does not exist" containerID="c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.493561 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30"} err="failed to get container status \"c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30\": rpc error: code = NotFound desc = could not find container \"c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30\": container with ID starting with c05dbdc5ff69c0c9f40188f601ea64f91fb7c1c55b9e6dfa2eb97440c80cdc30 not found: ID does not exist"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.493588 4792 scope.go:117] "RemoveContainer" containerID="dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796"
Nov 27 18:37:44 crc kubenswrapper[4792]: E1127 18:37:44.493979 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796\": container with ID starting with dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796 not found: ID does not exist" containerID="dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.494011 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796"} err="failed to get container status \"dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796\": rpc error: code = NotFound desc = could not find container \"dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796\": container with ID starting with dfe184b69a51e5d02c5c4e346604dd99a1cc07f2632db3918380a0f607933796 not found: ID does not exist"
Nov 27 18:37:44 crc kubenswrapper[4792]: I1127 18:37:44.699518 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" path="/var/lib/kubelet/pods/e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa/volumes"
Nov 27 18:37:54 crc kubenswrapper[4792]: I1127 18:37:54.686990 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187"
Nov 27 18:37:54 crc kubenswrapper[4792]: E1127 18:37:54.688085 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:38:08 crc kubenswrapper[4792]: I1127 18:38:08.694549 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187"
Nov 27 18:38:09 crc kubenswrapper[4792]: I1127 18:38:09.652050 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"08f43da188f335c21767911cf49a6a85da6f4da4668f81f41b0b0ddc255b4613"}
Nov 27 18:40:08 crc kubenswrapper[4792]: I1127 18:40:08.290150 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:40:08 crc kubenswrapper[4792]: I1127 18:40:08.290792 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:40:38 crc kubenswrapper[4792]: I1127 18:40:38.290698 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:40:38 crc kubenswrapper[4792]: I1127 18:40:38.291534 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.289800 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.290258 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.290305 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx"
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.291308 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08f43da188f335c21767911cf49a6a85da6f4da4668f81f41b0b0ddc255b4613"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.291358 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://08f43da188f335c21767911cf49a6a85da6f4da4668f81f41b0b0ddc255b4613" gracePeriod=600
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.764088 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="08f43da188f335c21767911cf49a6a85da6f4da4668f81f41b0b0ddc255b4613" exitCode=0
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.764457 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"08f43da188f335c21767911cf49a6a85da6f4da4668f81f41b0b0ddc255b4613"}
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.764491 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6"}
Nov 27 18:41:08 crc kubenswrapper[4792]: I1127 18:41:08.764513 4792 scope.go:117] "RemoveContainer" containerID="62a93de30e648bbaccc97d96fce037f87c514e881910e8573ff16977082be187"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.137627 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rcwhn"]
Nov 27 18:41:24 crc kubenswrapper[4792]: E1127 18:41:24.138587 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerName="extract-utilities"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.138607 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerName="extract-utilities"
Nov 27 18:41:24 crc kubenswrapper[4792]: E1127 18:41:24.138628 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerName="registry-server"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.138638 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerName="registry-server"
Nov 27 18:41:24 crc kubenswrapper[4792]: E1127 18:41:24.138747 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerName="extract-content"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.138758 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerName="extract-content"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.139063 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16ecbc0-d2fd-4cfa-a92b-d4ac630a6cfa" containerName="registry-server"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.141106 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.151445 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rcwhn"]
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.250845 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-catalog-content\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.250921 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2wsf\" (UniqueName: \"kubernetes.io/projected/a9419e55-bd03-4fa7-a94c-9c27fe3264da-kube-api-access-t2wsf\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.250967 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-utilities\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.361331 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-catalog-content\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.361428 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2wsf\" (UniqueName: \"kubernetes.io/projected/a9419e55-bd03-4fa7-a94c-9c27fe3264da-kube-api-access-t2wsf\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.361480 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-utilities\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.362216 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-catalog-content\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.362274 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-utilities\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.405164 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2wsf\" (UniqueName: \"kubernetes.io/projected/a9419e55-bd03-4fa7-a94c-9c27fe3264da-kube-api-access-t2wsf\") pod \"redhat-operators-rcwhn\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") " pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.462512 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:24 crc kubenswrapper[4792]: I1127 18:41:24.971712 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rcwhn"]
Nov 27 18:41:25 crc kubenswrapper[4792]: I1127 18:41:25.974381 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerID="49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9" exitCode=0
Nov 27 18:41:25 crc kubenswrapper[4792]: I1127 18:41:25.974840 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwhn" event={"ID":"a9419e55-bd03-4fa7-a94c-9c27fe3264da","Type":"ContainerDied","Data":"49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9"}
Nov 27 18:41:25 crc kubenswrapper[4792]: I1127 18:41:25.975449 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwhn" event={"ID":"a9419e55-bd03-4fa7-a94c-9c27fe3264da","Type":"ContainerStarted","Data":"7b0f6fbc020215e930d60a856e0ef7a113b5bd75d560b9f90bc4e4130b0e6537"}
Nov 27 18:41:25 crc kubenswrapper[4792]: I1127 18:41:25.978247 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 27 18:41:26 crc kubenswrapper[4792]: I1127 18:41:26.989161 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwhn" event={"ID":"a9419e55-bd03-4fa7-a94c-9c27fe3264da","Type":"ContainerStarted","Data":"1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b"}
Nov 27 18:41:33 crc kubenswrapper[4792]: I1127 18:41:33.073117 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerID="1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b" exitCode=0
Nov 27 18:41:33 crc kubenswrapper[4792]: I1127 18:41:33.073198 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwhn" event={"ID":"a9419e55-bd03-4fa7-a94c-9c27fe3264da","Type":"ContainerDied","Data":"1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b"}
Nov 27 18:41:34 crc kubenswrapper[4792]: I1127 18:41:34.086602 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwhn" event={"ID":"a9419e55-bd03-4fa7-a94c-9c27fe3264da","Type":"ContainerStarted","Data":"6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af"}
Nov 27 18:41:34 crc kubenswrapper[4792]: I1127 18:41:34.124219 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rcwhn" podStartSLOduration=2.501521182 podStartE2EDuration="10.124187598s" podCreationTimestamp="2025-11-27 18:41:24 +0000 UTC" firstStartedPulling="2025-11-27 18:41:25.978010647 +0000 UTC m=+5508.320836955" lastFinishedPulling="2025-11-27 18:41:33.600677033 +0000 UTC m=+5515.943503371" observedRunningTime="2025-11-27 18:41:34.114448486 +0000 UTC m=+5516.457274804" watchObservedRunningTime="2025-11-27 18:41:34.124187598 +0000 UTC m=+5516.467013916"
Nov 27 18:41:34 crc kubenswrapper[4792]: I1127 18:41:34.462675 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:34 crc kubenswrapper[4792]: I1127 18:41:34.463056 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:41:35 crc kubenswrapper[4792]: I1127 18:41:35.551960 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rcwhn" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="registry-server" probeResult="failure" output=<
Nov 27 18:41:35 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Nov 27 18:41:35 crc kubenswrapper[4792]: >
Nov 27 18:41:40 crc kubenswrapper[4792]: I1127 18:41:40.193760 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-7844df848f-mmmmh" podUID="25e66971-1039-45a3-9010-17efb7f2dbf6" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.94:7472/metrics\": dial tcp 10.217.0.94:7472: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Nov 27 18:41:40 crc kubenswrapper[4792]: I1127 18:41:40.211756 4792 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-f8648f98b-b2ds9" podUID="0056c3c2-a1e5-4733-a428-fd3b91475472" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.96:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 27 18:41:40 crc kubenswrapper[4792]: I1127 18:41:40.253432 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-rhchz" podUID="a4f24305-d786-4537-b13b-86e83451bef4" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 27 18:41:40 crc kubenswrapper[4792]: I1127 18:41:40.254489 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-t2tmt" podUID="fad170db-00fe-471f-b3b3-0201e1b54c21" containerName="registry-server" probeResult="failure" output=<
Nov 27 18:41:40 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Nov 27 18:41:40 crc kubenswrapper[4792]: >
Nov 27 18:41:45 crc kubenswrapper[4792]: I1127 18:41:45.518064 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rcwhn" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="registry-server" probeResult="failure" output=<
Nov 27 18:41:45 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Nov 27 18:41:45 crc kubenswrapper[4792]: >
Nov 27 18:41:55 crc kubenswrapper[4792]: I1127 18:41:55.515215 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rcwhn" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="registry-server" probeResult="failure" output=<
Nov 27 18:41:55 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Nov 27 18:41:55 crc kubenswrapper[4792]: >
Nov 27 18:42:05 crc kubenswrapper[4792]: I1127 18:42:05.521169 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rcwhn" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="registry-server" probeResult="failure" output=<
Nov 27 18:42:05 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s
Nov 27 18:42:05 crc kubenswrapper[4792]: >
Nov 27 18:42:06 crc kubenswrapper[4792]: I1127 18:42:06.973893 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c8sc5"]
Nov 27 18:42:06 crc kubenswrapper[4792]: I1127 18:42:06.977222 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:06 crc kubenswrapper[4792]: I1127 18:42:06.987300 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8sc5"]
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.095954 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-catalog-content\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.096306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9hs\" (UniqueName: \"kubernetes.io/projected/92be02c6-0814-4202-9a74-e96a7fbef1d1-kube-api-access-wh9hs\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.096493 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-utilities\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.199215 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-catalog-content\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.199599 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh9hs\" (UniqueName: \"kubernetes.io/projected/92be02c6-0814-4202-9a74-e96a7fbef1d1-kube-api-access-wh9hs\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.199669 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-utilities\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.200274 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-utilities\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.200469 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-catalog-content\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.228447 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh9hs\" (UniqueName: \"kubernetes.io/projected/92be02c6-0814-4202-9a74-e96a7fbef1d1-kube-api-access-wh9hs\") pod \"redhat-marketplace-c8sc5\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") " pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.304671 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:07 crc kubenswrapper[4792]: I1127 18:42:07.545060 4792 scope.go:117] "RemoveContainer" containerID="066b1c537d71a2ef0f884288de40c8affb84a9608296990052e011d2e0999bf2"
Nov 27 18:42:08 crc kubenswrapper[4792]: I1127 18:42:08.033012 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8sc5"]
Nov 27 18:42:08 crc kubenswrapper[4792]: I1127 18:42:08.420406 4792 scope.go:117] "RemoveContainer" containerID="8dcf01a2a071d38aa4d9f39b1a83a2d022dd90b1a2bbad9ba018f18d0b8c6020"
Nov 27 18:42:08 crc kubenswrapper[4792]: I1127 18:42:08.540119 4792 scope.go:117] "RemoveContainer" containerID="41023a35e3ca9066585bb66afcbd8f69da099ea2ebebc235137123fbfac9e020"
Nov 27 18:42:08 crc kubenswrapper[4792]: I1127 18:42:08.591938 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8sc5" event={"ID":"92be02c6-0814-4202-9a74-e96a7fbef1d1","Type":"ContainerStarted","Data":"f91511f31702ece6aadd7d1d41d2ec42d7acc3bcf366fd7bb5e4e563d8a13704"}
Nov 27 18:42:09 crc kubenswrapper[4792]: I1127 18:42:09.605450 4792 generic.go:334] "Generic (PLEG): container finished" podID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerID="1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae" exitCode=0
Nov 27 18:42:09 crc kubenswrapper[4792]: I1127 18:42:09.605499 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8sc5" event={"ID":"92be02c6-0814-4202-9a74-e96a7fbef1d1","Type":"ContainerDied","Data":"1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae"}
Nov 27 18:42:11 crc kubenswrapper[4792]: I1127 18:42:11.649864 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8sc5" event={"ID":"92be02c6-0814-4202-9a74-e96a7fbef1d1","Type":"ContainerDied","Data":"0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a"}
Nov 27 18:42:11 crc kubenswrapper[4792]: I1127 18:42:11.649786 4792 generic.go:334] "Generic (PLEG): container finished" podID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerID="0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a" exitCode=0
Nov 27 18:42:12 crc kubenswrapper[4792]: I1127 18:42:12.663433 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8sc5" event={"ID":"92be02c6-0814-4202-9a74-e96a7fbef1d1","Type":"ContainerStarted","Data":"14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f"}
Nov 27 18:42:12 crc kubenswrapper[4792]: I1127 18:42:12.693691 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c8sc5" podStartSLOduration=3.9586698289999998 podStartE2EDuration="6.693635824s" podCreationTimestamp="2025-11-27 18:42:06 +0000 UTC" firstStartedPulling="2025-11-27 18:42:09.608293619 +0000 UTC m=+5551.951119937" lastFinishedPulling="2025-11-27 18:42:12.343259614 +0000 UTC m=+5554.686085932" observedRunningTime="2025-11-27 18:42:12.683622215 +0000 UTC m=+5555.026448543" watchObservedRunningTime="2025-11-27 18:42:12.693635824 +0000 UTC m=+5555.036462142"
Nov 27 18:42:14 crc kubenswrapper[4792]: I1127 18:42:14.561712 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:42:14 crc kubenswrapper[4792]: I1127 18:42:14.644751 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:42:15 crc kubenswrapper[4792]: I1127 18:42:15.348627 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rcwhn"]
Nov 27 18:42:15 crc kubenswrapper[4792]: I1127 18:42:15.690077 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rcwhn" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="registry-server" containerID="cri-o://6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af" gracePeriod=2
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.488413 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.654556 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-catalog-content\") pod \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") "
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.655012 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-utilities\") pod \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") "
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.655241 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2wsf\" (UniqueName: \"kubernetes.io/projected/a9419e55-bd03-4fa7-a94c-9c27fe3264da-kube-api-access-t2wsf\") pod \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\" (UID: \"a9419e55-bd03-4fa7-a94c-9c27fe3264da\") "
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.655979 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-utilities" (OuterVolumeSpecName: "utilities") pod "a9419e55-bd03-4fa7-a94c-9c27fe3264da" (UID: "a9419e55-bd03-4fa7-a94c-9c27fe3264da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.665170 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9419e55-bd03-4fa7-a94c-9c27fe3264da-kube-api-access-t2wsf" (OuterVolumeSpecName: "kube-api-access-t2wsf") pod "a9419e55-bd03-4fa7-a94c-9c27fe3264da" (UID: "a9419e55-bd03-4fa7-a94c-9c27fe3264da"). InnerVolumeSpecName "kube-api-access-t2wsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.707902 4792 generic.go:334] "Generic (PLEG): container finished" podID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerID="6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af" exitCode=0
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.708015 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rcwhn"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.731349 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwhn" event={"ID":"a9419e55-bd03-4fa7-a94c-9c27fe3264da","Type":"ContainerDied","Data":"6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af"}
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.731409 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcwhn" event={"ID":"a9419e55-bd03-4fa7-a94c-9c27fe3264da","Type":"ContainerDied","Data":"7b0f6fbc020215e930d60a856e0ef7a113b5bd75d560b9f90bc4e4130b0e6537"}
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.731437 4792 scope.go:117] "RemoveContainer" containerID="6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.759900 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2wsf\" (UniqueName: \"kubernetes.io/projected/a9419e55-bd03-4fa7-a94c-9c27fe3264da-kube-api-access-t2wsf\") on node \"crc\" DevicePath \"\""
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.759948 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-utilities\") on node \"crc\" DevicePath \"\""
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.767213 4792 scope.go:117] "RemoveContainer" containerID="1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.796319 4792 scope.go:117] "RemoveContainer" containerID="49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.825317 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9419e55-bd03-4fa7-a94c-9c27fe3264da" (UID: "a9419e55-bd03-4fa7-a94c-9c27fe3264da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.863228 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9419e55-bd03-4fa7-a94c-9c27fe3264da-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.866104 4792 scope.go:117] "RemoveContainer" containerID="6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af"
Nov 27 18:42:16 crc kubenswrapper[4792]: E1127 18:42:16.876476 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af\": container with ID starting with 6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af not found: ID does not exist" containerID="6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.876542 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af"} err="failed to get container status \"6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af\": rpc error: code = NotFound desc = could not find container \"6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af\": container with ID starting with 6060d92259eb7ae27883926d505d1172860647cdf79e8ecf8879f7d54fbaf2af not found: ID does not exist"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.876579 4792 scope.go:117] "RemoveContainer" containerID="1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b"
Nov 27 18:42:16 crc kubenswrapper[4792]: E1127 18:42:16.877103 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b\": container with ID starting with 1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b not found: ID does not exist" containerID="1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.877156 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b"} err="failed to get container status \"1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b\": rpc error: code = NotFound desc = could not find container \"1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b\": container with ID starting with 1213f2df051fc83ec7da0121be91740e6adcb8da5d288a8e37737229f6c1873b not found: ID does not exist"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.877186 4792 scope.go:117] "RemoveContainer" containerID="49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9"
Nov 27 18:42:16 crc kubenswrapper[4792]: E1127 18:42:16.877750 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9\": container with ID starting with 49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9 not found: ID does not exist" containerID="49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9"
Nov 27 18:42:16 crc kubenswrapper[4792]: I1127 18:42:16.877786 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9"} err="failed to get container status \"49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9\": rpc error: code = NotFound desc = could not find container \"49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9\": container with ID starting with 49fd5988f23ab12e76064441e5fcb52d17ac49f8cc74f55b232cb1655e83c0d9 not found: ID does not exist"
Nov 27 18:42:17 crc kubenswrapper[4792]: I1127 18:42:17.056851 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rcwhn"]
Nov 27 18:42:17 crc kubenswrapper[4792]: I1127 18:42:17.066856 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rcwhn"]
Nov 27 18:42:17 crc kubenswrapper[4792]: I1127 18:42:17.305019 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:17 crc kubenswrapper[4792]: I1127 18:42:17.305369 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:17 crc kubenswrapper[4792]: I1127 18:42:17.360467 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:17 crc kubenswrapper[4792]: I1127 18:42:17.772604 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:18 crc kubenswrapper[4792]: I1127 18:42:18.710774 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" path="/var/lib/kubelet/pods/a9419e55-bd03-4fa7-a94c-9c27fe3264da/volumes"
Nov 27 18:42:19 crc kubenswrapper[4792]: I1127 18:42:19.749174 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8sc5"]
Nov 27 18:42:20 crc kubenswrapper[4792]: I1127 18:42:20.750831 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c8sc5" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerName="registry-server" containerID="cri-o://14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f" gracePeriod=2
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.277507 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.375355 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh9hs\" (UniqueName: \"kubernetes.io/projected/92be02c6-0814-4202-9a74-e96a7fbef1d1-kube-api-access-wh9hs\") pod \"92be02c6-0814-4202-9a74-e96a7fbef1d1\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") "
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.375847 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-utilities\") pod \"92be02c6-0814-4202-9a74-e96a7fbef1d1\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") "
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.375947 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-catalog-content\") pod \"92be02c6-0814-4202-9a74-e96a7fbef1d1\" (UID: \"92be02c6-0814-4202-9a74-e96a7fbef1d1\") "
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.376738 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-utilities" (OuterVolumeSpecName: "utilities") pod "92be02c6-0814-4202-9a74-e96a7fbef1d1" (UID: "92be02c6-0814-4202-9a74-e96a7fbef1d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.382296 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92be02c6-0814-4202-9a74-e96a7fbef1d1-kube-api-access-wh9hs" (OuterVolumeSpecName: "kube-api-access-wh9hs") pod "92be02c6-0814-4202-9a74-e96a7fbef1d1" (UID: "92be02c6-0814-4202-9a74-e96a7fbef1d1"). InnerVolumeSpecName "kube-api-access-wh9hs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.394379 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92be02c6-0814-4202-9a74-e96a7fbef1d1" (UID: "92be02c6-0814-4202-9a74-e96a7fbef1d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.481129 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh9hs\" (UniqueName: \"kubernetes.io/projected/92be02c6-0814-4202-9a74-e96a7fbef1d1-kube-api-access-wh9hs\") on node \"crc\" DevicePath \"\""
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.481499 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-utilities\") on node \"crc\" DevicePath \"\""
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.481914 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92be02c6-0814-4202-9a74-e96a7fbef1d1-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.765681 4792 generic.go:334] "Generic (PLEG): container finished" podID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerID="14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f" exitCode=0
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.765739 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8sc5" event={"ID":"92be02c6-0814-4202-9a74-e96a7fbef1d1","Type":"ContainerDied","Data":"14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f"}
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.765784 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8sc5" event={"ID":"92be02c6-0814-4202-9a74-e96a7fbef1d1","Type":"ContainerDied","Data":"f91511f31702ece6aadd7d1d41d2ec42d7acc3bcf366fd7bb5e4e563d8a13704"}
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.765813 4792 scope.go:117] "RemoveContainer" containerID="14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.765796 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8sc5"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.795738 4792 scope.go:117] "RemoveContainer" containerID="0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.806264 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8sc5"]
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.822404 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8sc5"]
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.846920 4792 scope.go:117] "RemoveContainer" containerID="1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.885218 4792 scope.go:117] "RemoveContainer" containerID="14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f"
Nov 27 18:42:21 crc kubenswrapper[4792]: E1127 18:42:21.885789 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f\": container with ID starting with 14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f not found: ID does not exist" containerID="14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.885861 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f"} err="failed to get container status \"14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f\": rpc error: code = NotFound desc = could not find container \"14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f\": container with ID starting with 14a7cbae6a5cdf795baa7e4c59387dee1fb57c25b8ad22a81c3a557500b2067f not found: ID does not exist"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.886028 4792 scope.go:117] "RemoveContainer" containerID="0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a"
Nov 27 18:42:21 crc kubenswrapper[4792]: E1127 18:42:21.886529 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a\": container with ID starting with 0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a not found: ID does not exist" containerID="0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.886633 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a"} err="failed to get container status \"0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a\": rpc error: code = NotFound desc = could not find container \"0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a\": container with ID starting with 0f5cabe4c169d09d868cbddbb8a25726f7d2417d4d516d528e73c740be6b264a not found: ID does not exist"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.886770 4792 scope.go:117] "RemoveContainer" containerID="1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae"
Nov 27 18:42:21 crc kubenswrapper[4792]: E1127 18:42:21.887120 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae\": container with ID starting with 1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae not found: ID does not exist" containerID="1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae"
Nov 27 18:42:21 crc kubenswrapper[4792]: I1127 18:42:21.887164 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae"} err="failed to get container status \"1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae\": rpc error: code = NotFound desc = could not find container \"1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae\": container with ID starting with 1a3754c50c918a45cf978f1864cb278434716d3e3fce15caccde6cbe018206ae not found: ID does not exist"
Nov 27 18:42:22 crc kubenswrapper[4792]: I1127 18:42:22.712425 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" path="/var/lib/kubelet/pods/92be02c6-0814-4202-9a74-e96a7fbef1d1/volumes"
Nov 27 18:43:08 crc kubenswrapper[4792]: I1127 18:43:08.291082 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:43:08 crc kubenswrapper[4792]: I1127 18:43:08.292070 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:43:38 crc kubenswrapper[4792]: I1127 18:43:38.290450 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:43:38 crc kubenswrapper[4792]: I1127 18:43:38.291052 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:44:08 crc kubenswrapper[4792]: I1127 18:44:08.290789 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 27 18:44:08 crc kubenswrapper[4792]: I1127 18:44:08.291429 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 27 18:44:08 crc kubenswrapper[4792]: I1127 18:44:08.291490 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx"
Nov 27 18:44:08 crc kubenswrapper[4792]: I1127 18:44:08.292616 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 27 18:44:08 crc kubenswrapper[4792]: I1127 18:44:08.292721 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" gracePeriod=600
Nov 27 18:44:08 crc kubenswrapper[4792]: E1127 18:44:08.417064 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:44:09 crc kubenswrapper[4792]: I1127 18:44:09.017147 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" exitCode=0
Nov 27 18:44:09 crc kubenswrapper[4792]: I1127 18:44:09.017200 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6"}
Nov 27 18:44:09 crc kubenswrapper[4792]: I1127 18:44:09.017233 4792 scope.go:117] "RemoveContainer" containerID="08f43da188f335c21767911cf49a6a85da6f4da4668f81f41b0b0ddc255b4613"
Nov 27 18:44:09 crc kubenswrapper[4792]: I1127 18:44:09.018271 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6"
Nov 27 18:44:09 crc kubenswrapper[4792]: E1127 18:44:09.018699 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:44:20 crc kubenswrapper[4792]: I1127 18:44:20.687217 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6"
Nov 27 18:44:20 crc kubenswrapper[4792]: E1127 18:44:20.688107 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:44:31 crc
kubenswrapper[4792]: I1127 18:44:31.689121 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:44:31 crc kubenswrapper[4792]: E1127 18:44:31.690354 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:44:45 crc kubenswrapper[4792]: I1127 18:44:45.686526 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:44:45 crc kubenswrapper[4792]: E1127 18:44:45.687343 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.205974 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc"] Nov 27 18:45:00 crc kubenswrapper[4792]: E1127 18:45:00.207819 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="registry-server" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.207844 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="registry-server" Nov 27 18:45:00 crc kubenswrapper[4792]: E1127 18:45:00.207859 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="extract-content" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.207866 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="extract-content" Nov 27 18:45:00 crc kubenswrapper[4792]: E1127 18:45:00.207882 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerName="extract-content" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.207891 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerName="extract-content" Nov 27 18:45:00 crc kubenswrapper[4792]: E1127 18:45:00.207919 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerName="extract-utilities" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.207926 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerName="extract-utilities" Nov 27 18:45:00 crc kubenswrapper[4792]: E1127 18:45:00.207935 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="extract-utilities" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.207941 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="extract-utilities" Nov 27 18:45:00 crc kubenswrapper[4792]: E1127 
18:45:00.207954 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerName="registry-server" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.207960 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerName="registry-server" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.208324 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9419e55-bd03-4fa7-a94c-9c27fe3264da" containerName="registry-server" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.208342 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="92be02c6-0814-4202-9a74-e96a7fbef1d1" containerName="registry-server" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.209743 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.234453 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc"] Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.242259 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.243812 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.255794 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdtv\" (UniqueName: \"kubernetes.io/projected/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-kube-api-access-ttdtv\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.255923 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-config-volume\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.256922 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-secret-volume\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.359496 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-secret-volume\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.359597 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttdtv\" (UniqueName: 
\"kubernetes.io/projected/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-kube-api-access-ttdtv\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.359706 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-config-volume\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.361019 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-config-volume\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.369767 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-secret-volume\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.383434 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttdtv\" (UniqueName: \"kubernetes.io/projected/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-kube-api-access-ttdtv\") pod \"collect-profiles-29404485-ss9mc\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.544730 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:00 crc kubenswrapper[4792]: I1127 18:45:00.690557 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:45:00 crc kubenswrapper[4792]: E1127 18:45:00.691412 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:45:01 crc kubenswrapper[4792]: I1127 18:45:01.031535 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc"] Nov 27 18:45:01 crc kubenswrapper[4792]: I1127 18:45:01.631706 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" event={"ID":"f3e1c210-4cf2-4154-8aa3-4bf4087c5254","Type":"ContainerStarted","Data":"49edd8337da19f678593edd1950115a38a62693748fe4a7f2cc2a3b7dc68ca5a"} Nov 27 18:45:01 crc kubenswrapper[4792]: I1127 18:45:01.632039 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" event={"ID":"f3e1c210-4cf2-4154-8aa3-4bf4087c5254","Type":"ContainerStarted","Data":"6fd2b7f0c5d7b31f5d701159733b6a775f857e37b19c8e48dcfa11adf736f80d"} Nov 27 18:45:01 crc kubenswrapper[4792]: I1127 18:45:01.652942 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" podStartSLOduration=1.65291786 podStartE2EDuration="1.65291786s" podCreationTimestamp="2025-11-27 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 18:45:01.646861169 +0000 UTC m=+5723.989687487" watchObservedRunningTime="2025-11-27 18:45:01.65291786 +0000 UTC m=+5723.995744178" Nov 27 18:45:02 crc kubenswrapper[4792]: I1127 18:45:02.646946 4792 generic.go:334] "Generic (PLEG): container finished" podID="f3e1c210-4cf2-4154-8aa3-4bf4087c5254" containerID="49edd8337da19f678593edd1950115a38a62693748fe4a7f2cc2a3b7dc68ca5a" exitCode=0 Nov 27 18:45:02 crc kubenswrapper[4792]: I1127 18:45:02.647022 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" event={"ID":"f3e1c210-4cf2-4154-8aa3-4bf4087c5254","Type":"ContainerDied","Data":"49edd8337da19f678593edd1950115a38a62693748fe4a7f2cc2a3b7dc68ca5a"} Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.135568 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.190601 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-config-volume\") pod \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.190856 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttdtv\" (UniqueName: \"kubernetes.io/projected/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-kube-api-access-ttdtv\") pod \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.190941 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-secret-volume\") pod \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\" (UID: \"f3e1c210-4cf2-4154-8aa3-4bf4087c5254\") " Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.191292 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-config-volume" (OuterVolumeSpecName: "config-volume") pod "f3e1c210-4cf2-4154-8aa3-4bf4087c5254" (UID: "f3e1c210-4cf2-4154-8aa3-4bf4087c5254"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.191518 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-config-volume\") on node \"crc\" DevicePath \"\"" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.199078 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-kube-api-access-ttdtv" (OuterVolumeSpecName: "kube-api-access-ttdtv") pod "f3e1c210-4cf2-4154-8aa3-4bf4087c5254" (UID: "f3e1c210-4cf2-4154-8aa3-4bf4087c5254"). InnerVolumeSpecName "kube-api-access-ttdtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.201103 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f3e1c210-4cf2-4154-8aa3-4bf4087c5254" (UID: "f3e1c210-4cf2-4154-8aa3-4bf4087c5254"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.294957 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttdtv\" (UniqueName: \"kubernetes.io/projected/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-kube-api-access-ttdtv\") on node \"crc\" DevicePath \"\"" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.294996 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3e1c210-4cf2-4154-8aa3-4bf4087c5254-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.675036 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" event={"ID":"f3e1c210-4cf2-4154-8aa3-4bf4087c5254","Type":"ContainerDied","Data":"6fd2b7f0c5d7b31f5d701159733b6a775f857e37b19c8e48dcfa11adf736f80d"} Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.675086 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fd2b7f0c5d7b31f5d701159733b6a775f857e37b19c8e48dcfa11adf736f80d" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.675460 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404485-ss9mc" Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.745423 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"] Nov 27 18:45:04 crc kubenswrapper[4792]: I1127 18:45:04.756493 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404440-vfl6s"] Nov 27 18:45:06 crc kubenswrapper[4792]: I1127 18:45:06.703556 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111c652f-251a-4e87-857d-f6c8df36396d" path="/var/lib/kubelet/pods/111c652f-251a-4e87-857d-f6c8df36396d/volumes" Nov 27 18:45:08 crc kubenswrapper[4792]: I1127 18:45:08.774839 4792 scope.go:117] "RemoveContainer" containerID="9fc7603010df5dd4cf1a90359e034145e062f0f394d5d8e9bc6770cb6a604f18" Nov 27 18:45:13 crc kubenswrapper[4792]: I1127 18:45:13.687043 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:45:13 crc kubenswrapper[4792]: E1127 18:45:13.687871 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:45:25 crc kubenswrapper[4792]: I1127 18:45:25.687718 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:45:25 crc kubenswrapper[4792]: E1127 18:45:25.688520 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:45:26 
crc kubenswrapper[4792]: I1127 18:45:26.657952 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jn54"] Nov 27 18:45:26 crc kubenswrapper[4792]: E1127 18:45:26.659411 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e1c210-4cf2-4154-8aa3-4bf4087c5254" containerName="collect-profiles" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.659443 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e1c210-4cf2-4154-8aa3-4bf4087c5254" containerName="collect-profiles" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.659779 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e1c210-4cf2-4154-8aa3-4bf4087c5254" containerName="collect-profiles" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.662580 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.673653 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jn54"] Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.737306 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-utilities\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.737457 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-catalog-content\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.737492 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5bg\" (UniqueName: \"kubernetes.io/projected/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-kube-api-access-js5bg\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.840503 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-utilities\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.840609 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-catalog-content\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.840633 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js5bg\" (UniqueName: \"kubernetes.io/projected/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-kube-api-access-js5bg\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " 
pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.842385 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-catalog-content\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.842393 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-utilities\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:26 crc kubenswrapper[4792]: I1127 18:45:26.865867 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js5bg\" (UniqueName: \"kubernetes.io/projected/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-kube-api-access-js5bg\") pod \"community-operators-5jn54\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:27 crc kubenswrapper[4792]: I1127 18:45:27.026951 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:27 crc kubenswrapper[4792]: I1127 18:45:27.782060 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jn54"] Nov 27 18:45:27 crc kubenswrapper[4792]: W1127 18:45:27.784535 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c3fde56_ac05_4626_bbd1_cc6f46fa5a2f.slice/crio-a1e0d5897717ce040d37b7c3f5772d6c9df71462104c1926f30ac2fbb1572003 WatchSource:0}: Error finding container a1e0d5897717ce040d37b7c3f5772d6c9df71462104c1926f30ac2fbb1572003: Status 404 returned error can't find the container with id a1e0d5897717ce040d37b7c3f5772d6c9df71462104c1926f30ac2fbb1572003 Nov 27 18:45:27 crc kubenswrapper[4792]: I1127 18:45:27.973268 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jn54" event={"ID":"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f","Type":"ContainerStarted","Data":"a1e0d5897717ce040d37b7c3f5772d6c9df71462104c1926f30ac2fbb1572003"} Nov 27 18:45:28 crc kubenswrapper[4792]: I1127 18:45:28.986888 4792 generic.go:334] "Generic (PLEG): container finished" podID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerID="223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3" exitCode=0 Nov 27 18:45:28 crc kubenswrapper[4792]: I1127 18:45:28.987147 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jn54" event={"ID":"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f","Type":"ContainerDied","Data":"223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3"} Nov 27 18:45:31 crc kubenswrapper[4792]: I1127 18:45:31.010068 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jn54" event={"ID":"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f","Type":"ContainerStarted","Data":"f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3"} Nov 27 18:45:33 crc kubenswrapper[4792]: I1127 18:45:33.033941 4792 generic.go:334] "Generic (PLEG): container finished" podID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" 
containerID="f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3" exitCode=0 Nov 27 18:45:33 crc kubenswrapper[4792]: I1127 18:45:33.034045 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jn54" event={"ID":"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f","Type":"ContainerDied","Data":"f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3"} Nov 27 18:45:34 crc kubenswrapper[4792]: I1127 18:45:34.048550 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jn54" event={"ID":"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f","Type":"ContainerStarted","Data":"97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571"} Nov 27 18:45:34 crc kubenswrapper[4792]: I1127 18:45:34.073978 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jn54" podStartSLOduration=3.627838536 podStartE2EDuration="8.07395885s" podCreationTimestamp="2025-11-27 18:45:26 +0000 UTC" firstStartedPulling="2025-11-27 18:45:28.989770023 +0000 UTC m=+5751.332596341" lastFinishedPulling="2025-11-27 18:45:33.435890337 +0000 UTC m=+5755.778716655" observedRunningTime="2025-11-27 18:45:34.072243847 +0000 UTC m=+5756.415070165" watchObservedRunningTime="2025-11-27 18:45:34.07395885 +0000 UTC m=+5756.416785168" Nov 27 18:45:36 crc kubenswrapper[4792]: I1127 18:45:36.687254 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:45:36 crc kubenswrapper[4792]: E1127 18:45:36.688019 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:45:37 crc kubenswrapper[4792]: I1127 18:45:37.027926 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:37 crc kubenswrapper[4792]: I1127 18:45:37.028284 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:38 crc kubenswrapper[4792]: I1127 18:45:38.093730 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5jn54" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="registry-server" probeResult="failure" output=< Nov 27 18:45:38 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:45:38 crc kubenswrapper[4792]: > Nov 27 18:45:48 crc kubenswrapper[4792]: I1127 18:45:48.086141 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5jn54" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="registry-server" probeResult="failure" output=< Nov 27 18:45:48 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:45:48 crc kubenswrapper[4792]: > Nov 27 18:45:48 crc kubenswrapper[4792]: I1127 18:45:48.698609 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:45:48 crc kubenswrapper[4792]: E1127 18:45:48.699415 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:45:57 crc kubenswrapper[4792]: I1127 18:45:57.080169 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:57 crc kubenswrapper[4792]: I1127 18:45:57.132046 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:57 crc kubenswrapper[4792]: I1127 18:45:57.855575 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jn54"] Nov 27 18:45:58 crc kubenswrapper[4792]: I1127 18:45:58.297136 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5jn54" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="registry-server" containerID="cri-o://97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571" gracePeriod=2 Nov 27 18:45:58 crc kubenswrapper[4792]: I1127 18:45:58.884777 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:58 crc kubenswrapper[4792]: I1127 18:45:58.917244 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js5bg\" (UniqueName: \"kubernetes.io/projected/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-kube-api-access-js5bg\") pod \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " Nov 27 18:45:58 crc kubenswrapper[4792]: I1127 18:45:58.923740 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-kube-api-access-js5bg" (OuterVolumeSpecName: "kube-api-access-js5bg") pod "2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" (UID: "2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f"). InnerVolumeSpecName "kube-api-access-js5bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.019188 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-catalog-content\") pod \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.019560 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-utilities\") pod \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\" (UID: \"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f\") " Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.020351 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-utilities" (OuterVolumeSpecName: "utilities") pod "2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" (UID: "2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.020422 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js5bg\" (UniqueName: \"kubernetes.io/projected/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-kube-api-access-js5bg\") on node \"crc\" DevicePath \"\"" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.078848 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" (UID: "2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.122984 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.123037 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.309631 4792 generic.go:334] "Generic (PLEG): container finished" podID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerID="97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571" exitCode=0 Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.309698 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jn54" event={"ID":"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f","Type":"ContainerDied","Data":"97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571"} Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.309753 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jn54" event={"ID":"2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f","Type":"ContainerDied","Data":"a1e0d5897717ce040d37b7c3f5772d6c9df71462104c1926f30ac2fbb1572003"} Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.309710 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jn54" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.309779 4792 scope.go:117] "RemoveContainer" containerID="97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.344492 4792 scope.go:117] "RemoveContainer" containerID="f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.351396 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jn54"] Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.360824 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5jn54"] Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.366947 4792 scope.go:117] "RemoveContainer" containerID="223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.434734 4792 scope.go:117] "RemoveContainer" containerID="97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571" Nov 27 18:45:59 crc kubenswrapper[4792]: E1127 18:45:59.435061 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571\": container with ID starting with 97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571 not found: ID does not exist" containerID="97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.435090 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571"} err="failed to get container status \"97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571\": rpc error: code = NotFound desc = could not find container \"97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571\": container with ID starting with 97d390caf7c24de89d9a98ab39766db9eda247c9e143eeab9fb9013808b14571 not found: ID does not exist" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.435113 4792 scope.go:117] "RemoveContainer" containerID="f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3" Nov 27 18:45:59 crc kubenswrapper[4792]: E1127 18:45:59.435323 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3\": container with ID starting with f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3 not found: ID does not exist" containerID="f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.435343 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3"} err="failed to get container status \"f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3\": rpc error: code = NotFound desc = could not find container \"f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3\": container with ID starting with f57c1576bb295ed918ad8cb5a7e851b59ade69d3b369eb0266e473b8b83cb9c3 not found: ID does not exist" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.435354 4792 scope.go:117] "RemoveContainer" 
containerID="223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3" Nov 27 18:45:59 crc kubenswrapper[4792]: E1127 18:45:59.435537 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3\": container with ID starting with 223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3 not found: ID does not exist" containerID="223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3" Nov 27 18:45:59 crc kubenswrapper[4792]: I1127 18:45:59.435572 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3"} err="failed to get container status \"223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3\": rpc error: code = NotFound desc = could not find container \"223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3\": container with ID starting with 223f2962768ed75f8e919a3f1ff2e1cb3d1db74e4256ba4c9da3307878105ea3 not found: ID does not exist" Nov 27 18:45:59 crc kubenswrapper[4792]: E1127 18:45:59.541026 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c3fde56_ac05_4626_bbd1_cc6f46fa5a2f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c3fde56_ac05_4626_bbd1_cc6f46fa5a2f.slice/crio-a1e0d5897717ce040d37b7c3f5772d6c9df71462104c1926f30ac2fbb1572003\": RecentStats: unable to find data in memory cache]" Nov 27 18:46:00 crc kubenswrapper[4792]: I1127 18:46:00.698162 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" path="/var/lib/kubelet/pods/2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f/volumes" Nov 27 18:46:02 crc kubenswrapper[4792]: I1127 18:46:02.686458 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:46:02 crc kubenswrapper[4792]: E1127 18:46:02.687210 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:46:14 crc kubenswrapper[4792]: I1127 18:46:14.687345 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:46:14 crc kubenswrapper[4792]: E1127 18:46:14.688270 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:46:26 crc kubenswrapper[4792]: I1127 18:46:26.687964 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:46:26 crc kubenswrapper[4792]: E1127 18:46:26.689101 4792 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:46:39 crc kubenswrapper[4792]: I1127 18:46:39.686741 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:46:39 crc kubenswrapper[4792]: E1127 18:46:39.687897 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:46:52 crc kubenswrapper[4792]: I1127 18:46:52.688498 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:46:52 crc kubenswrapper[4792]: E1127 18:46:52.689502 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:47:03 crc kubenswrapper[4792]: I1127 18:47:03.687113 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:47:03 crc kubenswrapper[4792]: E1127 18:47:03.688939 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:47:15 crc kubenswrapper[4792]: I1127 18:47:15.687691 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:47:15 crc kubenswrapper[4792]: E1127 18:47:15.688758 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:47:28 crc kubenswrapper[4792]: I1127 18:47:28.695330 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:47:28 crc kubenswrapper[4792]: E1127 18:47:28.696081 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:47:39 crc kubenswrapper[4792]: I1127 18:47:39.687529 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:47:39 crc kubenswrapper[4792]: E1127 18:47:39.688494 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:47:50 crc kubenswrapper[4792]: I1127 18:47:50.687372 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:47:50 crc kubenswrapper[4792]: E1127 18:47:50.688202 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:48:04 crc kubenswrapper[4792]: I1127 18:48:04.687003 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:48:04 crc kubenswrapper[4792]: E1127 18:48:04.688828 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:48:17 crc kubenswrapper[4792]: I1127 18:48:17.689320 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:48:17 crc kubenswrapper[4792]: E1127 18:48:17.690214 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:48:30 crc kubenswrapper[4792]: I1127 18:48:30.695907 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:48:30 crc kubenswrapper[4792]: E1127 18:48:30.696850 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:48:42 crc kubenswrapper[4792]: I1127 18:48:42.687522 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:48:42 crc kubenswrapper[4792]: E1127 18:48:42.688489 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:48:49 crc kubenswrapper[4792]: I1127 18:48:49.197154 4792 generic.go:334] "Generic (PLEG): container finished" podID="57348a1d-d6f9-4844-894d-b837afec3bdc" containerID="83efb2f6992c05c48118ee9c183ecc05db85a14aca0770e34c35636faf66b39d" exitCode=0 Nov 27 18:48:49 crc kubenswrapper[4792]: I1127 18:48:49.197322 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"57348a1d-d6f9-4844-894d-b837afec3bdc","Type":"ContainerDied","Data":"83efb2f6992c05c48118ee9c183ecc05db85a14aca0770e34c35636faf66b39d"} Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.621762 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.735926 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-temporary\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.736048 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.736071 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ca-certs\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.736103 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-workdir\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.736149 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm5z6\" (UniqueName: \"kubernetes.io/projected/57348a1d-d6f9-4844-894d-b837afec3bdc-kube-api-access-hm5z6\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.736178 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config-secret\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.736254 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.736421 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-config-data\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.736795 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.737090 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ssh-key\") pod \"57348a1d-d6f9-4844-894d-b837afec3bdc\" (UID: \"57348a1d-d6f9-4844-894d-b837afec3bdc\") " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.737550 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-config-data" (OuterVolumeSpecName: "config-data") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.737854 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.737879 4792 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.742794 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.743386 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). 
InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.745172 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57348a1d-d6f9-4844-894d-b837afec3bdc-kube-api-access-hm5z6" (OuterVolumeSpecName: "kube-api-access-hm5z6") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). InnerVolumeSpecName "kube-api-access-hm5z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.773698 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.785706 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.787254 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.808530 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "57348a1d-d6f9-4844-894d-b837afec3bdc" (UID: "57348a1d-d6f9-4844-894d-b837afec3bdc"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.839776 4792 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.841302 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.841412 4792 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.841496 4792 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/57348a1d-d6f9-4844-894d-b837afec3bdc-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.841558 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm5z6\" (UniqueName: \"kubernetes.io/projected/57348a1d-d6f9-4844-894d-b837afec3bdc-kube-api-access-hm5z6\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.841677 4792 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/57348a1d-d6f9-4844-894d-b837afec3bdc-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.843338 4792 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.872832 4792 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 27 18:48:50 crc kubenswrapper[4792]: I1127 18:48:50.945783 4792 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 27 18:48:51 crc kubenswrapper[4792]: I1127 18:48:51.221081 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"57348a1d-d6f9-4844-894d-b837afec3bdc","Type":"ContainerDied","Data":"6ca97e342409cd3104a14f9e8f5962beda401a70d39da3dc4464786ed09e0177"} Nov 27 18:48:51 crc kubenswrapper[4792]: I1127 18:48:51.221123 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca97e342409cd3104a14f9e8f5962beda401a70d39da3dc4464786ed09e0177" Nov 27 18:48:51 crc kubenswrapper[4792]: I1127 18:48:51.221190 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.835019 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 27 18:48:55 crc kubenswrapper[4792]: E1127 18:48:55.836090 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="registry-server" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.836105 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="registry-server" Nov 27 18:48:55 crc kubenswrapper[4792]: E1127 18:48:55.836126 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57348a1d-d6f9-4844-894d-b837afec3bdc" containerName="tempest-tests-tempest-tests-runner" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.836133 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="57348a1d-d6f9-4844-894d-b837afec3bdc" containerName="tempest-tests-tempest-tests-runner" Nov 27 18:48:55 crc kubenswrapper[4792]: E1127 18:48:55.836155 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="extract-utilities" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.836162 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="extract-utilities" Nov 27 18:48:55 crc kubenswrapper[4792]: E1127 18:48:55.836197 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="extract-content" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.836203 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="extract-content" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.836440 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="57348a1d-d6f9-4844-894d-b837afec3bdc" containerName="tempest-tests-tempest-tests-runner" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.836470 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3fde56-ac05-4626-bbd1-cc6f46fa5a2f" containerName="registry-server" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.837299 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.839178 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wrltw" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.851230 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.883253 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"858cefc2-c01c-428d-852b-f3599dda658b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.883412 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktkd8\" (UniqueName: \"kubernetes.io/projected/858cefc2-c01c-428d-852b-f3599dda658b-kube-api-access-ktkd8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"858cefc2-c01c-428d-852b-f3599dda658b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.986192 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"858cefc2-c01c-428d-852b-f3599dda658b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.986326 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktkd8\" (UniqueName: \"kubernetes.io/projected/858cefc2-c01c-428d-852b-f3599dda658b-kube-api-access-ktkd8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"858cefc2-c01c-428d-852b-f3599dda658b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:55 crc kubenswrapper[4792]: I1127 18:48:55.988175 4792 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"858cefc2-c01c-428d-852b-f3599dda658b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:56 crc kubenswrapper[4792]: I1127 18:48:56.006422 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktkd8\" (UniqueName: \"kubernetes.io/projected/858cefc2-c01c-428d-852b-f3599dda658b-kube-api-access-ktkd8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"858cefc2-c01c-428d-852b-f3599dda658b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:56 crc kubenswrapper[4792]: I1127 18:48:56.022174 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"858cefc2-c01c-428d-852b-f3599dda658b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:56 crc 
kubenswrapper[4792]: I1127 18:48:56.158717 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 27 18:48:56 crc kubenswrapper[4792]: I1127 18:48:56.691507 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 27 18:48:56 crc kubenswrapper[4792]: I1127 18:48:56.706277 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 18:48:57 crc kubenswrapper[4792]: I1127 18:48:57.310072 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"858cefc2-c01c-428d-852b-f3599dda658b","Type":"ContainerStarted","Data":"b5ccc880f37fbc7921a3c5b4409de2b5af0f059b6078ee539295187d51d40a6c"} Nov 27 18:48:57 crc kubenswrapper[4792]: I1127 18:48:57.687427 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:48:57 crc kubenswrapper[4792]: E1127 18:48:57.687993 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:48:58 crc kubenswrapper[4792]: I1127 18:48:58.322681 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"858cefc2-c01c-428d-852b-f3599dda658b","Type":"ContainerStarted","Data":"cd24104a912261312757e267af766ff76db1e1fc3d566541ce44784e62ac257f"} Nov 27 18:49:12 crc kubenswrapper[4792]: I1127 18:49:12.688481 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:49:13 crc kubenswrapper[4792]: I1127 18:49:13.507403 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"00e5eb1067167c84c4609eaef8fcbbad526cbcc37291f40611bc937ef3d3b277"} Nov 27 18:49:13 crc kubenswrapper[4792]: I1127 18:49:13.539750 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=17.25626162 podStartE2EDuration="18.539706017s" podCreationTimestamp="2025-11-27 18:48:55 +0000 UTC" firstStartedPulling="2025-11-27 18:48:56.705961584 +0000 UTC m=+5959.048787902" lastFinishedPulling="2025-11-27 18:48:57.989405971 +0000 UTC m=+5960.332232299" observedRunningTime="2025-11-27 18:48:58.345719249 +0000 UTC m=+5960.688545587" watchObservedRunningTime="2025-11-27 18:49:13.539706017 +0000 UTC m=+5975.882532335" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.307245 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4htl/must-gather-csjgx"] Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.311287 4792 util.go:30] "No sandbox for pod can be found. 
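The pod_startup_latency_tracker entry above encodes the image-pull window: podStartE2EDuration minus podStartSLOduration should equal lastFinishedPulling minus firstStartedPulling. A quick check with the values logged for the logs pod (nanosecond tails truncated to microseconds; the arithmetic is mine, not part of the log):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

# Figures copied from the pod_startup_latency_tracker entry above.
slo = 17.25626162          # podStartSLOduration, seconds
e2e = 18.539706017         # podStartE2EDuration, seconds
first_pull = datetime.strptime("2025-11-27 18:48:56.705961", FMT)
last_pull = datetime.strptime("2025-11-27 18:48:57.989405", FMT)

pull_window = (last_pull - first_pull).total_seconds()
print(round(e2e - slo, 6))    # 1.283444
print(round(pull_window, 6))  # 1.283444: the SLO duration excludes image pulls
```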
Need to start a new one" pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.321996 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c4htl"/"default-dockercfg-f6fb4" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.322210 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4htl"/"kube-root-ca.crt" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.322777 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4htl"/"openshift-service-ca.crt" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.343189 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4htl/must-gather-csjgx"] Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.394776 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq2qh\" (UniqueName: \"kubernetes.io/projected/288241f0-8105-4cd4-9e5b-b3abb299e2ef-kube-api-access-zq2qh\") pod \"must-gather-csjgx\" (UID: \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\") " pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.395199 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/288241f0-8105-4cd4-9e5b-b3abb299e2ef-must-gather-output\") pod \"must-gather-csjgx\" (UID: \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\") " pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.497829 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq2qh\" (UniqueName: \"kubernetes.io/projected/288241f0-8105-4cd4-9e5b-b3abb299e2ef-kube-api-access-zq2qh\") pod \"must-gather-csjgx\" (UID: \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\") " pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.498001 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/288241f0-8105-4cd4-9e5b-b3abb299e2ef-must-gather-output\") pod \"must-gather-csjgx\" (UID: \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\") " pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.498539 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/288241f0-8105-4cd4-9e5b-b3abb299e2ef-must-gather-output\") pod \"must-gather-csjgx\" (UID: \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\") " pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.516828 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq2qh\" (UniqueName: \"kubernetes.io/projected/288241f0-8105-4cd4-9e5b-b3abb299e2ef-kube-api-access-zq2qh\") pod \"must-gather-csjgx\" (UID: \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\") " pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:49:28 crc kubenswrapper[4792]: I1127 18:49:28.649830 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:49:29 crc kubenswrapper[4792]: I1127 18:49:29.145948 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4htl/must-gather-csjgx"] Nov 27 18:49:29 crc kubenswrapper[4792]: I1127 18:49:29.683711 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/must-gather-csjgx" event={"ID":"288241f0-8105-4cd4-9e5b-b3abb299e2ef","Type":"ContainerStarted","Data":"2501799a9d5b94830673951f5c141fc1ac6ad2a959d4ad403f802996ca3acd3d"} Nov 27 18:49:39 crc kubenswrapper[4792]: I1127 18:49:39.892971 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/must-gather-csjgx" event={"ID":"288241f0-8105-4cd4-9e5b-b3abb299e2ef","Type":"ContainerStarted","Data":"2fbbb649966ccb11c0c231f4bfc9e198ddd089bc202fcf9152b57ddcd4c67e99"} Nov 27 18:49:39 crc kubenswrapper[4792]: I1127 18:49:39.893564 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/must-gather-csjgx" event={"ID":"288241f0-8105-4cd4-9e5b-b3abb299e2ef","Type":"ContainerStarted","Data":"fbd2364d7a81e7a919fef0351722b0fb6769db5de1111fc85d0a5b0bdf0afff4"} Nov 27 18:49:39 crc kubenswrapper[4792]: I1127 18:49:39.911545 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4htl/must-gather-csjgx" podStartSLOduration=1.7734547360000001 podStartE2EDuration="11.911526817s" podCreationTimestamp="2025-11-27 18:49:28 +0000 UTC" firstStartedPulling="2025-11-27 18:49:29.16022144 +0000 UTC m=+5991.503047758" lastFinishedPulling="2025-11-27 18:49:39.298293521 +0000 UTC m=+6001.641119839" observedRunningTime="2025-11-27 18:49:39.907745213 +0000 UTC m=+6002.250571551" watchObservedRunningTime="2025-11-27 18:49:39.911526817 +0000 UTC m=+6002.254353135" Nov 27 18:49:44 crc kubenswrapper[4792]: I1127 18:49:44.968463 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4htl/crc-debug-bvtt8"] Nov 27 18:49:44 crc kubenswrapper[4792]: I1127 18:49:44.973314 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:49:45 crc kubenswrapper[4792]: I1127 18:49:45.126521 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fs9p\" (UniqueName: \"kubernetes.io/projected/312b3061-c3ea-4c30-a7d2-4606e20c3f68-kube-api-access-5fs9p\") pod \"crc-debug-bvtt8\" (UID: \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\") " pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:49:45 crc kubenswrapper[4792]: I1127 18:49:45.126974 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/312b3061-c3ea-4c30-a7d2-4606e20c3f68-host\") pod \"crc-debug-bvtt8\" (UID: \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\") " pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:49:45 crc kubenswrapper[4792]: I1127 18:49:45.229677 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fs9p\" (UniqueName: \"kubernetes.io/projected/312b3061-c3ea-4c30-a7d2-4606e20c3f68-kube-api-access-5fs9p\") pod \"crc-debug-bvtt8\" (UID: \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\") " pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:49:45 crc kubenswrapper[4792]: I1127 18:49:45.229774 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/312b3061-c3ea-4c30-a7d2-4606e20c3f68-host\") pod \"crc-debug-bvtt8\" (UID: \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\") " pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:49:45 crc kubenswrapper[4792]: I1127 18:49:45.230794 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/312b3061-c3ea-4c30-a7d2-4606e20c3f68-host\") pod \"crc-debug-bvtt8\" (UID: \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\") " pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:49:45 crc kubenswrapper[4792]: I1127 18:49:45.255461 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fs9p\" (UniqueName: \"kubernetes.io/projected/312b3061-c3ea-4c30-a7d2-4606e20c3f68-kube-api-access-5fs9p\") pod \"crc-debug-bvtt8\" (UID: \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\") " pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:49:45 crc kubenswrapper[4792]: I1127 18:49:45.296713 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:49:45 crc kubenswrapper[4792]: I1127 18:49:45.951346 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/crc-debug-bvtt8" event={"ID":"312b3061-c3ea-4c30-a7d2-4606e20c3f68","Type":"ContainerStarted","Data":"114c3b6253fe3f6c93fa95d566cf2c5f599e58a409861fb99f8d79bc6c565e50"} Nov 27 18:49:59 crc kubenswrapper[4792]: I1127 18:49:59.113440 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/crc-debug-bvtt8" event={"ID":"312b3061-c3ea-4c30-a7d2-4606e20c3f68","Type":"ContainerStarted","Data":"2b81889969be77cd9fa9996d97463aedbf9232ebff467192f8cbb6036ed34484"} Nov 27 18:49:59 crc kubenswrapper[4792]: I1127 18:49:59.135391 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4htl/crc-debug-bvtt8" podStartSLOduration=2.23159041 podStartE2EDuration="15.1353514s" podCreationTimestamp="2025-11-27 18:49:44 +0000 UTC" firstStartedPulling="2025-11-27 18:49:45.348323971 +0000 UTC m=+6007.691150289" lastFinishedPulling="2025-11-27 18:49:58.252084961 +0000 UTC m=+6020.594911279" observedRunningTime="2025-11-27 18:49:59.133788631 +0000 UTC m=+6021.476614949" watchObservedRunningTime="2025-11-27 18:49:59.1353514 +0000 UTC m=+6021.478177718" Nov 27 18:50:50 crc kubenswrapper[4792]: I1127 18:50:50.683700 4792 generic.go:334] "Generic (PLEG): container finished" podID="312b3061-c3ea-4c30-a7d2-4606e20c3f68" containerID="2b81889969be77cd9fa9996d97463aedbf9232ebff467192f8cbb6036ed34484" exitCode=0 Nov 27 18:50:50 crc kubenswrapper[4792]: I1127 18:50:50.683788 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/crc-debug-bvtt8" event={"ID":"312b3061-c3ea-4c30-a7d2-4606e20c3f68","Type":"ContainerDied","Data":"2b81889969be77cd9fa9996d97463aedbf9232ebff467192f8cbb6036ed34484"} Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.817946 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.867491 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4htl/crc-debug-bvtt8"] Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.878895 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4htl/crc-debug-bvtt8"] Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.892065 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/312b3061-c3ea-4c30-a7d2-4606e20c3f68-host\") pod \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\" (UID: \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\") " Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.892196 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/312b3061-c3ea-4c30-a7d2-4606e20c3f68-host" (OuterVolumeSpecName: "host") pod "312b3061-c3ea-4c30-a7d2-4606e20c3f68" (UID: "312b3061-c3ea-4c30-a7d2-4606e20c3f68"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.892213 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fs9p\" (UniqueName: \"kubernetes.io/projected/312b3061-c3ea-4c30-a7d2-4606e20c3f68-kube-api-access-5fs9p\") pod \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\" (UID: \"312b3061-c3ea-4c30-a7d2-4606e20c3f68\") " Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.893143 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/312b3061-c3ea-4c30-a7d2-4606e20c3f68-host\") on node \"crc\" DevicePath \"\"" Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.899400 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312b3061-c3ea-4c30-a7d2-4606e20c3f68-kube-api-access-5fs9p" (OuterVolumeSpecName: "kube-api-access-5fs9p") pod "312b3061-c3ea-4c30-a7d2-4606e20c3f68" (UID: "312b3061-c3ea-4c30-a7d2-4606e20c3f68"). InnerVolumeSpecName "kube-api-access-5fs9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:50:51 crc kubenswrapper[4792]: I1127 18:50:51.995101 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fs9p\" (UniqueName: \"kubernetes.io/projected/312b3061-c3ea-4c30-a7d2-4606e20c3f68-kube-api-access-5fs9p\") on node \"crc\" DevicePath \"\"" Nov 27 18:50:52 crc kubenswrapper[4792]: I1127 18:50:52.705261 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312b3061-c3ea-4c30-a7d2-4606e20c3f68" path="/var/lib/kubelet/pods/312b3061-c3ea-4c30-a7d2-4606e20c3f68/volumes" Nov 27 18:50:52 crc kubenswrapper[4792]: I1127 18:50:52.711459 4792 scope.go:117] "RemoveContainer" containerID="2b81889969be77cd9fa9996d97463aedbf9232ebff467192f8cbb6036ed34484" Nov 27 18:50:52 crc kubenswrapper[4792]: I1127 18:50:52.711840 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-bvtt8" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.057602 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4htl/crc-debug-dwjwx"] Nov 27 18:50:53 crc kubenswrapper[4792]: E1127 18:50:53.058123 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312b3061-c3ea-4c30-a7d2-4606e20c3f68" containerName="container-00" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.058141 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="312b3061-c3ea-4c30-a7d2-4606e20c3f68" containerName="container-00" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.058441 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="312b3061-c3ea-4c30-a7d2-4606e20c3f68" containerName="container-00" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.059325 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.229338 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw5mg\" (UniqueName: \"kubernetes.io/projected/f1ee25d4-5218-42cb-b785-e6fc76524d3d-kube-api-access-rw5mg\") pod \"crc-debug-dwjwx\" (UID: \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\") " pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.229393 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1ee25d4-5218-42cb-b785-e6fc76524d3d-host\") pod \"crc-debug-dwjwx\" (UID: \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\") " pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.332007 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1ee25d4-5218-42cb-b785-e6fc76524d3d-host\") pod \"crc-debug-dwjwx\" (UID: \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\") " pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.332164 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1ee25d4-5218-42cb-b785-e6fc76524d3d-host\") pod \"crc-debug-dwjwx\" (UID: \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\") " pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.332450 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw5mg\" (UniqueName: \"kubernetes.io/projected/f1ee25d4-5218-42cb-b785-e6fc76524d3d-kube-api-access-rw5mg\") pod \"crc-debug-dwjwx\" (UID: \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\") " pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.363634 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw5mg\" (UniqueName: \"kubernetes.io/projected/f1ee25d4-5218-42cb-b785-e6fc76524d3d-kube-api-access-rw5mg\") pod \"crc-debug-dwjwx\" (UID: \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\") " pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.388216 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.725864 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/crc-debug-dwjwx" event={"ID":"f1ee25d4-5218-42cb-b785-e6fc76524d3d","Type":"ContainerStarted","Data":"d80f7fd7d0b1967dd1944c9e0f44cece71d4ef92fc4bed8dbfc950953b6a3598"} Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.726274 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/crc-debug-dwjwx" event={"ID":"f1ee25d4-5218-42cb-b785-e6fc76524d3d","Type":"ContainerStarted","Data":"e7d27afa80884f0dac4fae8fbadabafcea75d88b2e78671ecd1fefe4ee2cc250"} Nov 27 18:50:53 crc kubenswrapper[4792]: I1127 18:50:53.767183 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4htl/crc-debug-dwjwx" podStartSLOduration=0.767155139 podStartE2EDuration="767.155139ms" podCreationTimestamp="2025-11-27 18:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 18:50:53.742482436 +0000 UTC m=+6076.085308754" watchObservedRunningTime="2025-11-27 18:50:53.767155139 +0000 UTC m=+6076.109981457" Nov 27 18:50:54 crc kubenswrapper[4792]: I1127 18:50:54.741992 4792 generic.go:334] "Generic (PLEG): container finished" podID="f1ee25d4-5218-42cb-b785-e6fc76524d3d" containerID="d80f7fd7d0b1967dd1944c9e0f44cece71d4ef92fc4bed8dbfc950953b6a3598" exitCode=0 Nov 27 18:50:54 crc kubenswrapper[4792]: I1127 18:50:54.742042 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/crc-debug-dwjwx" event={"ID":"f1ee25d4-5218-42cb-b785-e6fc76524d3d","Type":"ContainerDied","Data":"d80f7fd7d0b1967dd1944c9e0f44cece71d4ef92fc4bed8dbfc950953b6a3598"} Nov 27 18:50:55 crc kubenswrapper[4792]: I1127 18:50:55.876219 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:55 crc kubenswrapper[4792]: I1127 18:50:55.991745 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1ee25d4-5218-42cb-b785-e6fc76524d3d-host\") pod \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\" (UID: \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\") " Nov 27 18:50:55 crc kubenswrapper[4792]: I1127 18:50:55.991996 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw5mg\" (UniqueName: \"kubernetes.io/projected/f1ee25d4-5218-42cb-b785-e6fc76524d3d-kube-api-access-rw5mg\") pod \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\" (UID: \"f1ee25d4-5218-42cb-b785-e6fc76524d3d\") " Nov 27 18:50:55 crc kubenswrapper[4792]: I1127 18:50:55.991996 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ee25d4-5218-42cb-b785-e6fc76524d3d-host" (OuterVolumeSpecName: "host") pod "f1ee25d4-5218-42cb-b785-e6fc76524d3d" (UID: "f1ee25d4-5218-42cb-b785-e6fc76524d3d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 18:50:55 crc kubenswrapper[4792]: I1127 18:50:55.992705 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1ee25d4-5218-42cb-b785-e6fc76524d3d-host\") on node \"crc\" DevicePath \"\"" Nov 27 18:50:55 crc kubenswrapper[4792]: I1127 18:50:55.997824 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ee25d4-5218-42cb-b785-e6fc76524d3d-kube-api-access-rw5mg" (OuterVolumeSpecName: "kube-api-access-rw5mg") pod "f1ee25d4-5218-42cb-b785-e6fc76524d3d" (UID: "f1ee25d4-5218-42cb-b785-e6fc76524d3d"). InnerVolumeSpecName "kube-api-access-rw5mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:50:56 crc kubenswrapper[4792]: I1127 18:50:56.094480 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw5mg\" (UniqueName: \"kubernetes.io/projected/f1ee25d4-5218-42cb-b785-e6fc76524d3d-kube-api-access-rw5mg\") on node \"crc\" DevicePath \"\"" Nov 27 18:50:56 crc kubenswrapper[4792]: I1127 18:50:56.673930 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4htl/crc-debug-dwjwx"] Nov 27 18:50:56 crc kubenswrapper[4792]: I1127 18:50:56.685762 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4htl/crc-debug-dwjwx"] Nov 27 18:50:56 crc kubenswrapper[4792]: I1127 18:50:56.699987 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ee25d4-5218-42cb-b785-e6fc76524d3d" path="/var/lib/kubelet/pods/f1ee25d4-5218-42cb-b785-e6fc76524d3d/volumes" Nov 27 18:50:56 crc kubenswrapper[4792]: I1127 18:50:56.764215 4792 scope.go:117] "RemoveContainer" containerID="d80f7fd7d0b1967dd1944c9e0f44cece71d4ef92fc4bed8dbfc950953b6a3598" Nov 27 18:50:56 crc kubenswrapper[4792]: I1127 18:50:56.764272 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-dwjwx" Nov 27 18:50:57 crc kubenswrapper[4792]: I1127 18:50:57.877621 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4htl/crc-debug-qmzll"] Nov 27 18:50:57 crc kubenswrapper[4792]: E1127 18:50:57.878795 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ee25d4-5218-42cb-b785-e6fc76524d3d" containerName="container-00" Nov 27 18:50:57 crc kubenswrapper[4792]: I1127 18:50:57.878817 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ee25d4-5218-42cb-b785-e6fc76524d3d" containerName="container-00" Nov 27 18:50:57 crc kubenswrapper[4792]: I1127 18:50:57.879052 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ee25d4-5218-42cb-b785-e6fc76524d3d" containerName="container-00" Nov 27 18:50:57 crc kubenswrapper[4792]: I1127 18:50:57.880320 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.037966 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v257s\" (UniqueName: \"kubernetes.io/projected/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-kube-api-access-v257s\") pod \"crc-debug-qmzll\" (UID: \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\") " pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.038198 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-host\") pod \"crc-debug-qmzll\" (UID: \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\") " pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.140586 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-host\") pod \"crc-debug-qmzll\" (UID: \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\") " pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.141124 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v257s\" (UniqueName: \"kubernetes.io/projected/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-kube-api-access-v257s\") pod \"crc-debug-qmzll\" (UID: \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\") " pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.141825 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-host\") pod \"crc-debug-qmzll\" (UID: \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\") " pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.166132 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v257s\" (UniqueName: \"kubernetes.io/projected/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-kube-api-access-v257s\") pod \"crc-debug-qmzll\" (UID: \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\") " pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.199544 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:50:58 crc kubenswrapper[4792]: W1127 18:50:58.249250 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod991edcfc_e2e8_41f9_9b55_cf8f6b65a6d9.slice/crio-3c9ddbcdc2f63de8c54a23dea0d87f4a131f2d3c42f030a6e936adbcbbc65b5e WatchSource:0}: Error finding container 3c9ddbcdc2f63de8c54a23dea0d87f4a131f2d3c42f030a6e936adbcbbc65b5e: Status 404 returned error can't find the container with id 3c9ddbcdc2f63de8c54a23dea0d87f4a131f2d3c42f030a6e936adbcbbc65b5e Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.789763 4792 generic.go:334] "Generic (PLEG): container finished" podID="991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9" containerID="48efb4ffca35b18956c9e835f5bc5855251ce248a799229744460797c1340e5b" exitCode=0 Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.789946 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/crc-debug-qmzll" event={"ID":"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9","Type":"ContainerDied","Data":"48efb4ffca35b18956c9e835f5bc5855251ce248a799229744460797c1340e5b"} Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.790214 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/crc-debug-qmzll" event={"ID":"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9","Type":"ContainerStarted","Data":"3c9ddbcdc2f63de8c54a23dea0d87f4a131f2d3c42f030a6e936adbcbbc65b5e"} Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.843970 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4htl/crc-debug-qmzll"] Nov 27 18:50:58 crc kubenswrapper[4792]: I1127 18:50:58.859298 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4htl/crc-debug-qmzll"] Nov 27 18:50:59 crc kubenswrapper[4792]: I1127 18:50:59.931586 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.084600 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v257s\" (UniqueName: \"kubernetes.io/projected/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-kube-api-access-v257s\") pod \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\" (UID: \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\") " Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.084662 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-host\") pod \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\" (UID: \"991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9\") " Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.084791 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-host" (OuterVolumeSpecName: "host") pod "991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9" (UID: "991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.085477 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-host\") on node \"crc\" DevicePath \"\"" Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.094891 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-kube-api-access-v257s" (OuterVolumeSpecName: "kube-api-access-v257s") pod "991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9" (UID: "991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9"). InnerVolumeSpecName "kube-api-access-v257s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.187839 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v257s\" (UniqueName: \"kubernetes.io/projected/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9-kube-api-access-v257s\") on node \"crc\" DevicePath \"\"" Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.702202 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9" path="/var/lib/kubelet/pods/991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9/volumes" Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.816406 4792 scope.go:117] "RemoveContainer" containerID="48efb4ffca35b18956c9e835f5bc5855251ce248a799229744460797c1340e5b" Nov 27 18:51:00 crc kubenswrapper[4792]: I1127 18:51:00.816544 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4htl/crc-debug-qmzll" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.417903 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kklg"] Nov 27 18:51:14 crc kubenswrapper[4792]: E1127 18:51:14.419101 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9" containerName="container-00" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.419121 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9" containerName="container-00" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.419417 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="991edcfc-e2e8-41f9-9b55-cf8f6b65a6d9" containerName="container-00" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.421551 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.484755 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kklg"] Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.536709 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bspv7\" (UniqueName: \"kubernetes.io/projected/c5237db4-a83c-4acb-bbe6-6ae44ece3322-kube-api-access-bspv7\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.537276 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-utilities\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.537302 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-catalog-content\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.639105 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bspv7\" (UniqueName: \"kubernetes.io/projected/c5237db4-a83c-4acb-bbe6-6ae44ece3322-kube-api-access-bspv7\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.639318 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-utilities\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.639340 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-catalog-content\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.639742 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-catalog-content\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.639863 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-utilities\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.665943 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bspv7\" (UniqueName: \"kubernetes.io/projected/c5237db4-a83c-4acb-bbe6-6ae44ece3322-kube-api-access-bspv7\") pod \"certified-operators-8kklg\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:14 crc kubenswrapper[4792]: I1127 18:51:14.814108 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:15 crc kubenswrapper[4792]: I1127 18:51:15.406434 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kklg"] Nov 27 18:51:15 crc kubenswrapper[4792]: I1127 18:51:15.990371 4792 generic.go:334] "Generic (PLEG): container finished" podID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerID="83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285" exitCode=0 Nov 27 18:51:15 crc kubenswrapper[4792]: I1127 18:51:15.990492 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kklg" event={"ID":"c5237db4-a83c-4acb-bbe6-6ae44ece3322","Type":"ContainerDied","Data":"83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285"} Nov 27 18:51:15 crc kubenswrapper[4792]: I1127 18:51:15.990767 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kklg" event={"ID":"c5237db4-a83c-4acb-bbe6-6ae44ece3322","Type":"ContainerStarted","Data":"8e0a983c6304c6e1b9a87115f3023b55fdff995da8372fe2e18a4ac4e18d6ec5"} Nov 27 18:51:17 crc kubenswrapper[4792]: I1127 18:51:17.005155 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kklg" event={"ID":"c5237db4-a83c-4acb-bbe6-6ae44ece3322","Type":"ContainerStarted","Data":"f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616"} Nov 27 18:51:21 crc kubenswrapper[4792]: I1127 18:51:21.068855 4792 generic.go:334] "Generic (PLEG): container finished" podID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerID="f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616" exitCode=0 Nov 27 18:51:21 crc kubenswrapper[4792]: I1127 18:51:21.068939 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kklg" event={"ID":"c5237db4-a83c-4acb-bbe6-6ae44ece3322","Type":"ContainerDied","Data":"f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616"} Nov 27 18:51:22 crc kubenswrapper[4792]: I1127 18:51:22.083434 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kklg" event={"ID":"c5237db4-a83c-4acb-bbe6-6ae44ece3322","Type":"ContainerStarted","Data":"06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800"} Nov 27 18:51:22 crc kubenswrapper[4792]: I1127 18:51:22.110175 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8kklg" podStartSLOduration=2.346502514 podStartE2EDuration="8.110145894s" podCreationTimestamp="2025-11-27 18:51:14 +0000 UTC" firstStartedPulling="2025-11-27 18:51:16.001697962 +0000 UTC m=+6098.344524280" lastFinishedPulling="2025-11-27 18:51:21.765341332 +0000 UTC m=+6104.108167660" observedRunningTime="2025-11-27 18:51:22.107201661 +0000 UTC m=+6104.450027999" watchObservedRunningTime="2025-11-27 18:51:22.110145894 +0000 UTC m=+6104.452972212" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.132927 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_b5e40c25-6ce7-4631-9877-a7c983c966f7/aodh-api/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.306162 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b5e40c25-6ce7-4631-9877-a7c983c966f7/aodh-evaluator/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.323876 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b5e40c25-6ce7-4631-9877-a7c983c966f7/aodh-listener/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.397513 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b5e40c25-6ce7-4631-9877-a7c983c966f7/aodh-notifier/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.520366 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-65f9f97c5d-544l8_e0a4f95d-c1db-43ea-9d79-185c188a4f9b/barbican-api/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.539765 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-65f9f97c5d-544l8_e0a4f95d-c1db-43ea-9d79-185c188a4f9b/barbican-api-log/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.774861 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6ff478798d-8s6ww_a8029b26-0d9c-428e-af30-62c262f079f4/barbican-keystone-listener/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.815010 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.815057 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.825836 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b9854ff99-9l5qq_64269486-bbcb-49d2-ab84-0591965b9277/barbican-worker/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.925549 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6ff478798d-8s6ww_a8029b26-0d9c-428e-af30-62c262f079f4/barbican-keystone-listener-log/0.log" Nov 27 18:51:24 crc kubenswrapper[4792]: I1127 18:51:24.998388 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b9854ff99-9l5qq_64269486-bbcb-49d2-ab84-0591965b9277/barbican-worker-log/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.090424 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g_cfb67295-f5ab-48cb-acae-25420d9d77f4/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.308069 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e4f18974-fb5b-4bb2-906b-9f17d1297b04/ceilometer-central-agent/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.347269 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e4f18974-fb5b-4bb2-906b-9f17d1297b04/ceilometer-notification-agent/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.387527 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e4f18974-fb5b-4bb2-906b-9f17d1297b04/proxy-httpd/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.431817 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_e4f18974-fb5b-4bb2-906b-9f17d1297b04/sg-core/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.584305 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bbc2fff7-567e-4a6d-918a-7f6f430486c1/cinder-api-log/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.658670 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bbc2fff7-567e-4a6d-918a-7f6f430486c1/cinder-api/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.773258 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_301f71fa-43fd-4005-a753-5127a2e7df97/cinder-scheduler/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.842118 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_301f71fa-43fd-4005-a753-5127a2e7df97/probe/0.log" Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.879381 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8kklg" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="registry-server" probeResult="failure" output=< Nov 27 18:51:25 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:51:25 crc kubenswrapper[4792]: > Nov 27 18:51:25 crc kubenswrapper[4792]: I1127 18:51:25.979405 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq_941f3fd2-382e-4dc2-94f4-39df69607cee/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:26 crc kubenswrapper[4792]: I1127 18:51:26.101565 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r_97d6e3f1-b04e-4f38-b104-3f74f8ed4683/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:26 crc kubenswrapper[4792]: I1127 18:51:26.253550 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-76jbs_ec33e14b-5586-4b5e-a807-396841a63250/init/0.log" Nov 27 18:51:26 crc kubenswrapper[4792]: I1127 18:51:26.637119 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-76jbs_ec33e14b-5586-4b5e-a807-396841a63250/init/0.log" Nov 27 18:51:26 crc kubenswrapper[4792]: I1127 18:51:26.663329 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-76jbs_ec33e14b-5586-4b5e-a807-396841a63250/dnsmasq-dns/0.log" Nov 27 18:51:26 crc kubenswrapper[4792]: I1127 18:51:26.744807 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-87lxc_2b8542bf-b789-4e1a-9ff9-5375dc57cc94/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:26 crc kubenswrapper[4792]: I1127 18:51:26.861580 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4657ccb2-3806-41d0-932d-195b809345fd/glance-httpd/0.log" Nov 27 18:51:26 crc kubenswrapper[4792]: I1127 18:51:26.891136 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4657ccb2-3806-41d0-932d-195b809345fd/glance-log/0.log" Nov 27 18:51:27 crc kubenswrapper[4792]: I1127 18:51:27.097028 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d64731cc-a4fe-498e-9553-4f7f5fce34a2/glance-log/0.log" Nov 27 
18:51:27 crc kubenswrapper[4792]: I1127 18:51:27.143386 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d64731cc-a4fe-498e-9553-4f7f5fce34a2/glance-httpd/0.log" Nov 27 18:51:27 crc kubenswrapper[4792]: I1127 18:51:27.683887 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-695947b5db-q2kr8_9603abd5-f9a5-4ace-9d0f-652992d6de1e/heat-engine/0.log" Nov 27 18:51:27 crc kubenswrapper[4792]: I1127 18:51:27.998046 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj_95dfa4b6-84cb-439b-a9ff-fbe5b048973e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:28 crc kubenswrapper[4792]: I1127 18:51:28.022333 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-f54bc59f4-fb7f4_e7c59788-726a-4159-91a6-766cad09ff7d/heat-api/0.log" Nov 27 18:51:28 crc kubenswrapper[4792]: I1127 18:51:28.077423 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nv2q9_6dbb090d-6543-4de1-80f3-1a61798d7870/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:28 crc kubenswrapper[4792]: I1127 18:51:28.116398 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6cc9cdbfc-zr59q_02e83c23-359e-428f-acab-41d6912a84ab/heat-cfnapi/0.log" Nov 27 18:51:28 crc kubenswrapper[4792]: I1127 18:51:28.355886 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29404441-l4szq_eea31a88-b7d7-4537-bd17-1a9edcaee2d9/keystone-cron/0.log" Nov 27 18:51:28 crc kubenswrapper[4792]: I1127 18:51:28.542533 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_dfcff168-fa89-462b-a1e2-8422c13e0ab3/kube-state-metrics/0.log" Nov 27 18:51:28 crc kubenswrapper[4792]: I1127 18:51:28.667745 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-c92t9_c1228795-b08e-4f02-ac5c-a9bc71058d23/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:28 crc kubenswrapper[4792]: I1127 18:51:28.818177 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6667648786-v844v_f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7/keystone-api/0.log" Nov 27 18:51:28 crc kubenswrapper[4792]: I1127 18:51:28.818211 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-pjp2g_caa13c46-5c39-46bb-a2bb-cfa46caae2b4/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:29 crc kubenswrapper[4792]: I1127 18:51:29.280980 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_0248f4d6-3146-4bb3-85d8-03cdfb42238a/mysqld-exporter/0.log" Nov 27 18:51:29 crc kubenswrapper[4792]: I1127 18:51:29.600841 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk_87368e9c-b9b2-499a-9825-de4ff047aabd/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:29 crc kubenswrapper[4792]: I1127 18:51:29.664142 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fff8f565-9t8rn_1972fb8a-2570-4dd8-8ae1-b3fccf229e4b/neutron-httpd/0.log" Nov 27 18:51:29 crc kubenswrapper[4792]: I1127 18:51:29.665937 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5fff8f565-9t8rn_1972fb8a-2570-4dd8-8ae1-b3fccf229e4b/neutron-api/0.log" Nov 27 18:51:30 crc kubenswrapper[4792]: I1127 18:51:30.196115 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9/nova-cell0-conductor-conductor/0.log" Nov 27 18:51:30 crc kubenswrapper[4792]: I1127 18:51:30.575267 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3c9b4c85-0700-45e9-b663-ca02ecf5009d/nova-cell1-conductor-conductor/0.log" Nov 27 18:51:30 crc kubenswrapper[4792]: I1127 18:51:30.587895 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7cd1499d-a3bb-449a-85d6-fcb81e3b43ee/nova-api-log/0.log" Nov 27 18:51:30 crc kubenswrapper[4792]: I1127 18:51:30.833684 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd/nova-cell1-novncproxy-novncproxy/0.log" Nov 27 18:51:30 crc kubenswrapper[4792]: I1127 18:51:30.857581 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-fqdvn_83d3f635-5c64-4827-a54d-1b21ca1b6570/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:31 crc kubenswrapper[4792]: I1127 18:51:31.065290 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7cd1499d-a3bb-449a-85d6-fcb81e3b43ee/nova-api-api/0.log" Nov 27 18:51:31 crc kubenswrapper[4792]: I1127 18:51:31.156243 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_76bd9753-9395-4ae1-a0c5-10c1ee3f0347/nova-metadata-log/0.log" Nov 27 18:51:31 crc kubenswrapper[4792]: I1127 18:51:31.477117 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_09e9c959-b479-4008-8042-ffa78bb38460/nova-scheduler-scheduler/0.log" Nov 27 18:51:31 crc kubenswrapper[4792]: I1127 18:51:31.551574 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b/mysql-bootstrap/0.log" Nov 27 18:51:31 crc kubenswrapper[4792]: I1127 18:51:31.667138 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b/mysql-bootstrap/0.log" Nov 27 18:51:31 crc kubenswrapper[4792]: I1127 18:51:31.725571 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b/galera/0.log" Nov 27 18:51:31 crc kubenswrapper[4792]: I1127 18:51:31.891691 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8ed6358b-2030-436d-a847-724a53f802ea/mysql-bootstrap/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.086406 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8ed6358b-2030-436d-a847-724a53f802ea/mysql-bootstrap/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.145543 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8ed6358b-2030-436d-a847-724a53f802ea/galera/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.252935 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e88dd573-027f-458e-81ed-c133e141afb6/openstackclient/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.352109 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-z7zwq_88e47e4b-d7fb-4dfc-8352-9705403282a6/openstack-network-exporter/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.547379 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-n2t2x_8445903e-bdf0-4581-a2ce-728410f878ac/ovn-controller/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.753031 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dzgvb_a82ac7ae-1443-4fbc-a8bb-2383c148b809/ovsdb-server-init/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.899342 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dzgvb_a82ac7ae-1443-4fbc-a8bb-2383c148b809/ovs-vswitchd/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.907075 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dzgvb_a82ac7ae-1443-4fbc-a8bb-2383c148b809/ovsdb-server-init/0.log" Nov 27 18:51:32 crc kubenswrapper[4792]: I1127 18:51:32.980855 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dzgvb_a82ac7ae-1443-4fbc-a8bb-2383c148b809/ovsdb-server/0.log" Nov 27 18:51:33 crc kubenswrapper[4792]: I1127 18:51:33.194745 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jn9hq_a8b213a4-d6e2-4ed9-b67b-625fab313079/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:33 crc kubenswrapper[4792]: I1127 18:51:33.342515 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_446f7473-cc3f-42b6-931c-eb1747df2c73/openstack-network-exporter/0.log" Nov 27 18:51:33 crc kubenswrapper[4792]: I1127 18:51:33.398883 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_446f7473-cc3f-42b6-931c-eb1747df2c73/ovn-northd/0.log" Nov 27 18:51:33 crc kubenswrapper[4792]: I1127 18:51:33.483902 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_76bd9753-9395-4ae1-a0c5-10c1ee3f0347/nova-metadata-metadata/0.log" Nov 27 18:51:33 crc kubenswrapper[4792]: I1127 18:51:33.646388 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b7e5347-cd23-498f-ac14-95ce8f106b97/openstack-network-exporter/0.log" Nov 27 18:51:33 crc kubenswrapper[4792]: I1127 18:51:33.665424 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b7e5347-cd23-498f-ac14-95ce8f106b97/ovsdbserver-nb/0.log" Nov 27 18:51:33 crc kubenswrapper[4792]: I1127 18:51:33.835457 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5/openstack-network-exporter/0.log" Nov 27 18:51:33 crc kubenswrapper[4792]: I1127 18:51:33.911826 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5/ovsdbserver-sb/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.156505 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68b5cd97dd-hfxs2_26b3eb4f-347e-4da5-8da9-56f7620f43a8/placement-api/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.240114 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/init-config-reloader/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.245054 4792 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-68b5cd97dd-hfxs2_26b3eb4f-347e-4da5-8da9-56f7620f43a8/placement-log/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.436639 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/init-config-reloader/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.461129 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/config-reloader/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.478286 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/thanos-sidecar/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.503185 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/prometheus/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.714071 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73468e89-af69-44aa-bc4d-66c7e34a8dff/setup-container/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.951790 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73468e89-af69-44aa-bc4d-66c7e34a8dff/setup-container/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.968410 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73468e89-af69-44aa-bc4d-66c7e34a8dff/rabbitmq/0.log" Nov 27 18:51:34 crc kubenswrapper[4792]: I1127 18:51:34.976008 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d2993a9-7994-4249-bfd1-acc7b734eb16/setup-container/0.log" Nov 27 18:51:35 crc kubenswrapper[4792]: I1127 18:51:35.276580 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d2993a9-7994-4249-bfd1-acc7b734eb16/setup-container/0.log" Nov 27 18:51:35 crc kubenswrapper[4792]: I1127 18:51:35.299608 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d2993a9-7994-4249-bfd1-acc7b734eb16/rabbitmq/0.log" Nov 27 18:51:35 crc kubenswrapper[4792]: I1127 18:51:35.304828 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz_20b3860e-a914-42cd-b2e7-35ab54507a89/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:35 crc kubenswrapper[4792]: I1127 18:51:35.526728 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mdb54_70c3419b-b42e-42f5-be83-4de5d0e38566/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:35 crc kubenswrapper[4792]: I1127 18:51:35.589256 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj_8de61141-d67f-4491-ade9-57da76c018e7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:35 crc kubenswrapper[4792]: I1127 18:51:35.752088 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-28hzm_1c6f6f25-0120-4355-9803-5e7b6743588b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:35 crc kubenswrapper[4792]: I1127 18:51:35.853547 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hjr4g_72c0a753-4805-42a5-9b41-4fc97aad561b/ssh-known-hosts-edpm-deployment/0.log" Nov 27 18:51:35 crc kubenswrapper[4792]: I1127 18:51:35.872856 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8kklg" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="registry-server" probeResult="failure" output=< Nov 27 18:51:35 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:51:35 crc kubenswrapper[4792]: > Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.110971 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55455cb8cf-gtjxc_9ace987a-3f62-48ce-8c4b-b9c50cd2a29e/proxy-server/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.249989 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2n56v_66a7953b-06d4-453f-801c-4873d0d43c7a/swift-ring-rebalance/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.332259 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55455cb8cf-gtjxc_9ace987a-3f62-48ce-8c4b-b9c50cd2a29e/proxy-httpd/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.425402 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/account-auditor/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.483970 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/account-reaper/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.627864 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/account-replicator/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.642521 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/account-server/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.644012 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/container-auditor/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.798424 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/container-replicator/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.853021 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/container-server/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.879023 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-auditor/0.log" Nov 27 18:51:36 crc kubenswrapper[4792]: I1127 18:51:36.932699 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/container-updater/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.001617 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-expirer/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.076110 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-server/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.139420 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-replicator/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.185148 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-updater/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.216899 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/rsync/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.319614 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/swift-recon-cron/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.443405 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z_8bfd070a-8c21-4c11-b794-c5410285a701/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.604751 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl_78842a98-31e3-4f0b-8f35-6b8a1856a994/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.858787 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_858cefc2-c01c-428d-852b-f3599dda658b/test-operator-logs-container/0.log" Nov 27 18:51:37 crc kubenswrapper[4792]: I1127 18:51:37.986036 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9thzs_4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 18:51:38 crc kubenswrapper[4792]: I1127 18:51:38.293222 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:51:38 crc kubenswrapper[4792]: I1127 18:51:38.295805 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:51:38 crc kubenswrapper[4792]: I1127 18:51:38.959450 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_57348a1d-d6f9-4844-894d-b837afec3bdc/tempest-tests-tempest-tests-runner/0.log" Nov 27 18:51:43 crc kubenswrapper[4792]: I1127 18:51:43.523012 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666/memcached/0.log" Nov 27 18:51:44 crc kubenswrapper[4792]: I1127 18:51:44.872042 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:44 crc kubenswrapper[4792]: I1127 18:51:44.925374 4792 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:45 crc kubenswrapper[4792]: I1127 18:51:45.613623 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kklg"] Nov 27 18:51:46 crc kubenswrapper[4792]: I1127 18:51:46.388432 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8kklg" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="registry-server" containerID="cri-o://06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800" gracePeriod=2 Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.247268 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.402054 4792 generic.go:334] "Generic (PLEG): container finished" podID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerID="06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800" exitCode=0 Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.402101 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kklg" event={"ID":"c5237db4-a83c-4acb-bbe6-6ae44ece3322","Type":"ContainerDied","Data":"06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800"} Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.402134 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kklg" event={"ID":"c5237db4-a83c-4acb-bbe6-6ae44ece3322","Type":"ContainerDied","Data":"8e0a983c6304c6e1b9a87115f3023b55fdff995da8372fe2e18a4ac4e18d6ec5"} Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.402159 4792 scope.go:117] "RemoveContainer" containerID="06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.402343 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kklg" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.423179 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-catalog-content\") pod \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.423248 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-utilities\") pod \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.423393 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bspv7\" (UniqueName: \"kubernetes.io/projected/c5237db4-a83c-4acb-bbe6-6ae44ece3322-kube-api-access-bspv7\") pod \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\" (UID: \"c5237db4-a83c-4acb-bbe6-6ae44ece3322\") " Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.423929 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-utilities" (OuterVolumeSpecName: "utilities") pod "c5237db4-a83c-4acb-bbe6-6ae44ece3322" (UID: "c5237db4-a83c-4acb-bbe6-6ae44ece3322"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.424673 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.435426 4792 scope.go:117] "RemoveContainer" containerID="f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.459863 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5237db4-a83c-4acb-bbe6-6ae44ece3322-kube-api-access-bspv7" (OuterVolumeSpecName: "kube-api-access-bspv7") pod "c5237db4-a83c-4acb-bbe6-6ae44ece3322" (UID: "c5237db4-a83c-4acb-bbe6-6ae44ece3322"). InnerVolumeSpecName "kube-api-access-bspv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.505438 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5237db4-a83c-4acb-bbe6-6ae44ece3322" (UID: "c5237db4-a83c-4acb-bbe6-6ae44ece3322"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.526375 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bspv7\" (UniqueName: \"kubernetes.io/projected/c5237db4-a83c-4acb-bbe6-6ae44ece3322-kube-api-access-bspv7\") on node \"crc\" DevicePath \"\"" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.526599 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5237db4-a83c-4acb-bbe6-6ae44ece3322-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.538296 4792 scope.go:117] "RemoveContainer" containerID="83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.594896 4792 scope.go:117] "RemoveContainer" containerID="06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800" Nov 27 18:51:47 crc kubenswrapper[4792]: E1127 18:51:47.595449 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800\": container with ID starting with 06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800 not found: ID does not exist" containerID="06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.595704 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800"} err="failed to get container status \"06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800\": rpc error: code = NotFound desc = could not find container \"06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800\": container with ID starting with 06914f8c17be7c4bce4078efe2ff53a928db660aaf0f53695a3f661005b67800 not found: ID does not exist" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.595832 4792 scope.go:117] "RemoveContainer" 
containerID="f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616" Nov 27 18:51:47 crc kubenswrapper[4792]: E1127 18:51:47.596396 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616\": container with ID starting with f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616 not found: ID does not exist" containerID="f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.596435 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616"} err="failed to get container status \"f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616\": rpc error: code = NotFound desc = could not find container \"f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616\": container with ID starting with f3229491629f6d42973f3a9e5f6718164f5df1f12561694673936909c1b40616 not found: ID does not exist" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.596462 4792 scope.go:117] "RemoveContainer" containerID="83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285" Nov 27 18:51:47 crc kubenswrapper[4792]: E1127 18:51:47.596768 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285\": container with ID starting with 83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285 not found: ID does not exist" containerID="83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.597433 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285"} err="failed to get container status \"83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285\": rpc error: code = NotFound desc = could not find container \"83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285\": container with ID starting with 83eb724d581e627ed0514f8fb44f40808ad48b857f2eb0fda94d96534f4ad285 not found: ID does not exist" Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.754984 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kklg"] Nov 27 18:51:47 crc kubenswrapper[4792]: I1127 18:51:47.768214 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8kklg"] Nov 27 18:51:48 crc kubenswrapper[4792]: I1127 18:51:48.710317 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" path="/var/lib/kubelet/pods/c5237db4-a83c-4acb-bbe6-6ae44ece3322/volumes" Nov 27 18:52:06 crc kubenswrapper[4792]: I1127 18:52:06.825853 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/util/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.031209 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/util/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.045938 
4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/pull/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.103554 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/pull/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.325253 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/pull/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.328405 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/extract/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.369671 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/util/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.526076 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-8njms_880e84df-6b95-4c8d-8b4c-146f26d99098/kube-rbac-proxy/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.573687 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-z7fm9_db57e7fa-0523-4a09-91a0-371fe08e5052/kube-rbac-proxy/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.631887 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-8njms_880e84df-6b95-4c8d-8b4c-146f26d99098/manager/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.785089 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-z7fm9_db57e7fa-0523-4a09-91a0-371fe08e5052/manager/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.908780 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-xwttv_f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79/manager/0.log" Nov 27 18:52:07 crc kubenswrapper[4792]: I1127 18:52:07.933073 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-xwttv_f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79/kube-rbac-proxy/0.log" Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.182545 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-hkks9_ad88e4ad-7c33-4dac-85ed-54e7f69d8625/manager/0.log" Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.186476 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-hkks9_ad88e4ad-7c33-4dac-85ed-54e7f69d8625/kube-rbac-proxy/0.log" Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.291533 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.291592 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.434234 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-gqmrh_04aba733-246c-4169-b91d-c7708aea6a71/kube-rbac-proxy/0.log" Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.656240 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-gqmrh_04aba733-246c-4169-b91d-c7708aea6a71/manager/0.log" Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.687070 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-qs7wq_d29cd75e-9782-4f90-b9cf-95329e101cbb/kube-rbac-proxy/0.log" Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.720234 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-qs7wq_d29cd75e-9782-4f90-b9cf-95329e101cbb/manager/0.log" Nov 27 18:52:08 crc kubenswrapper[4792]: I1127 18:52:08.879208 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-zklgd_94d0c824-194b-4d52-ba80-1cc08301a196/kube-rbac-proxy/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.056026 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-zklgd_94d0c824-194b-4d52-ba80-1cc08301a196/manager/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.064510 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-7gmv4_5e49917e-d729-4661-a604-a603f9a8cca7/kube-rbac-proxy/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.193062 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-7gmv4_5e49917e-d729-4661-a604-a603f9a8cca7/manager/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.270298 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-v7kfm_fd4b3618-80a1-4d23-8faa-57c206b08cf6/kube-rbac-proxy/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.406802 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-dnsbx_652cb29e-91a9-433f-9002-c850a78cb8a4/kube-rbac-proxy/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.426950 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-v7kfm_fd4b3618-80a1-4d23-8faa-57c206b08cf6/manager/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.487337 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-dnsbx_652cb29e-91a9-433f-9002-c850a78cb8a4/manager/0.log" 
Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.638389 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-lp4kf_afd4a5dc-d971-4eeb-8272-0ead1e9b4274/kube-rbac-proxy/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.644442 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-lp4kf_afd4a5dc-d971-4eeb-8272-0ead1e9b4274/manager/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.870941 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-bmtbl_20dd117f-6517-4b59-855d-a0f9d08409a2/kube-rbac-proxy/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.940005 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-bmtbl_20dd117f-6517-4b59-855d-a0f9d08409a2/manager/0.log" Nov 27 18:52:09 crc kubenswrapper[4792]: I1127 18:52:09.991388 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-2lfqp_2fc2a1fd-6c7e-4d26-801b-5cac891fba51/kube-rbac-proxy/0.log" Nov 27 18:52:10 crc kubenswrapper[4792]: I1127 18:52:10.169693 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-9qhqv_969e1197-2aaa-42c9-b56e-7af3ef24e205/kube-rbac-proxy/0.log" Nov 27 18:52:10 crc kubenswrapper[4792]: I1127 18:52:10.249512 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-2lfqp_2fc2a1fd-6c7e-4d26-801b-5cac891fba51/manager/0.log" Nov 27 18:52:10 crc kubenswrapper[4792]: I1127 18:52:10.255939 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-9qhqv_969e1197-2aaa-42c9-b56e-7af3ef24e205/manager/0.log" Nov 27 18:52:10 crc kubenswrapper[4792]: I1127 18:52:10.420491 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2_55002036-a4a7-469c-93be-e4483f455a4c/kube-rbac-proxy/0.log" Nov 27 18:52:10 crc kubenswrapper[4792]: I1127 18:52:10.493096 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2_55002036-a4a7-469c-93be-e4483f455a4c/manager/0.log" Nov 27 18:52:10 crc kubenswrapper[4792]: I1127 18:52:10.884843 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-b44dff85c-lpx9d_d8282ec7-1375-403d-b679-d7e372e07f6f/operator/0.log" Nov 27 18:52:10 crc kubenswrapper[4792]: I1127 18:52:10.885908 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-n6smr_ad9fe5d7-1539-4597-b2b7-5fc5cf555264/registry-server/0.log" Nov 27 18:52:11 crc kubenswrapper[4792]: I1127 18:52:11.055717 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-z5rhr_57052bd6-e7c2-4ea0-bc6e-839ed4803541/kube-rbac-proxy/0.log" Nov 27 18:52:11 crc kubenswrapper[4792]: I1127 18:52:11.199773 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-z5rhr_57052bd6-e7c2-4ea0-bc6e-839ed4803541/manager/0.log" Nov 27 18:52:11 crc kubenswrapper[4792]: I1127 18:52:11.255068 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-fqb9p_cf051bf3-d415-40eb-8071-8f0509377c34/kube-rbac-proxy/0.log" Nov 27 18:52:11 crc kubenswrapper[4792]: I1127 18:52:11.358163 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-fqb9p_cf051bf3-d415-40eb-8071-8f0509377c34/manager/0.log" Nov 27 18:52:11 crc kubenswrapper[4792]: I1127 18:52:11.557777 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-q87xm_4d308a9c-7874-457f-a97f-4bb784a11783/operator/0.log" Nov 27 18:52:11 crc kubenswrapper[4792]: I1127 18:52:11.625385 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-75zc9_074f9cbe-fb30-4e1f-9156-ccc5100dcd3b/kube-rbac-proxy/0.log" Nov 27 18:52:11 crc kubenswrapper[4792]: I1127 18:52:11.728485 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-75zc9_074f9cbe-fb30-4e1f-9156-ccc5100dcd3b/manager/0.log" Nov 27 18:52:11 crc kubenswrapper[4792]: I1127 18:52:11.786927 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-ff79b6df5-jrwv2_4193b9b8-da59-42cf-94b2-a327608c59a6/kube-rbac-proxy/0.log" Nov 27 18:52:12 crc kubenswrapper[4792]: I1127 18:52:12.080427 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-8kj9s_de56fbe3-d4c6-430f-8b94-5136fbf4a79c/manager/0.log" Nov 27 18:52:12 crc kubenswrapper[4792]: I1127 18:52:12.094334 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-8kj9s_de56fbe3-d4c6-430f-8b94-5136fbf4a79c/kube-rbac-proxy/0.log" Nov 27 18:52:12 crc kubenswrapper[4792]: I1127 18:52:12.275703 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-bvh8l_6f57d750-e016-4d78-bdbe-b9b1c5a21787/kube-rbac-proxy/0.log" Nov 27 18:52:12 crc kubenswrapper[4792]: I1127 18:52:12.287903 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-ff79b6df5-jrwv2_4193b9b8-da59-42cf-94b2-a327608c59a6/manager/0.log" Nov 27 18:52:12 crc kubenswrapper[4792]: I1127 18:52:12.375855 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-bvh8l_6f57d750-e016-4d78-bdbe-b9b1c5a21787/manager/0.log" Nov 27 18:52:12 crc kubenswrapper[4792]: I1127 18:52:12.381344 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6644d5b8df-w6zdt_f1ef7f3c-052e-45e2-a51a-5d114d634c12/manager/0.log" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:21.980808 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5t7f8"] Nov 27 18:52:22 crc kubenswrapper[4792]: E1127 18:52:21.982136 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" 
containerName="extract-utilities" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:21.982156 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="extract-utilities" Nov 27 18:52:22 crc kubenswrapper[4792]: E1127 18:52:21.982174 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="registry-server" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:21.982182 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="registry-server" Nov 27 18:52:22 crc kubenswrapper[4792]: E1127 18:52:21.982238 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="extract-content" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:21.982248 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="extract-content" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:21.982523 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5237db4-a83c-4acb-bbe6-6ae44ece3322" containerName="registry-server" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:21.984449 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.062433 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5t7f8"] Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.090494 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqbt\" (UniqueName: \"kubernetes.io/projected/67ad7e58-369e-46e3-8ea0-7802b46be7d3-kube-api-access-cwqbt\") pod \"redhat-operators-5t7f8\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.090590 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-catalog-content\") pod \"redhat-operators-5t7f8\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.090687 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-utilities\") pod \"redhat-operators-5t7f8\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.192754 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwqbt\" (UniqueName: \"kubernetes.io/projected/67ad7e58-369e-46e3-8ea0-7802b46be7d3-kube-api-access-cwqbt\") pod \"redhat-operators-5t7f8\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.193049 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-catalog-content\") pod \"redhat-operators-5t7f8\" (UID: 
\"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.193083 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-utilities\") pod \"redhat-operators-5t7f8\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.193681 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-catalog-content\") pod \"redhat-operators-5t7f8\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.193698 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-utilities\") pod \"redhat-operators-5t7f8\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.221313 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwqbt\" (UniqueName: \"kubernetes.io/projected/67ad7e58-369e-46e3-8ea0-7802b46be7d3-kube-api-access-cwqbt\") pod \"redhat-operators-5t7f8\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:22 crc kubenswrapper[4792]: I1127 18:52:22.329181 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:23 crc kubenswrapper[4792]: I1127 18:52:23.109466 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5t7f8"] Nov 27 18:52:23 crc kubenswrapper[4792]: I1127 18:52:23.841088 4792 generic.go:334] "Generic (PLEG): container finished" podID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerID="db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5" exitCode=0 Nov 27 18:52:23 crc kubenswrapper[4792]: I1127 18:52:23.841189 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5t7f8" event={"ID":"67ad7e58-369e-46e3-8ea0-7802b46be7d3","Type":"ContainerDied","Data":"db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5"} Nov 27 18:52:23 crc kubenswrapper[4792]: I1127 18:52:23.841631 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5t7f8" event={"ID":"67ad7e58-369e-46e3-8ea0-7802b46be7d3","Type":"ContainerStarted","Data":"98946d0da4b25a2fda88667be83204868759f00c4f64cb5fc66b55dc17e9a5ca"} Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.586390 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qc7xw"] Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.589974 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.637700 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc7xw"] Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.665190 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-catalog-content\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.665275 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-utilities\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.665385 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2m7g\" (UniqueName: \"kubernetes.io/projected/f319d22f-3fd5-4377-87a2-13a3a8d2abab-kube-api-access-n2m7g\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.768307 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2m7g\" (UniqueName: \"kubernetes.io/projected/f319d22f-3fd5-4377-87a2-13a3a8d2abab-kube-api-access-n2m7g\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.768600 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-catalog-content\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.768822 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-utilities\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.769663 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-utilities\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.769712 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-catalog-content\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.812626 4792 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n2m7g\" (UniqueName: \"kubernetes.io/projected/f319d22f-3fd5-4377-87a2-13a3a8d2abab-kube-api-access-n2m7g\") pod \"redhat-marketplace-qc7xw\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:24 crc kubenswrapper[4792]: I1127 18:52:24.941674 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:27 crc kubenswrapper[4792]: I1127 18:52:25.518890 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc7xw"] Nov 27 18:52:27 crc kubenswrapper[4792]: I1127 18:52:25.886903 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5t7f8" event={"ID":"67ad7e58-369e-46e3-8ea0-7802b46be7d3","Type":"ContainerStarted","Data":"ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b"} Nov 27 18:52:27 crc kubenswrapper[4792]: I1127 18:52:25.888892 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc7xw" event={"ID":"f319d22f-3fd5-4377-87a2-13a3a8d2abab","Type":"ContainerStarted","Data":"10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4"} Nov 27 18:52:27 crc kubenswrapper[4792]: I1127 18:52:25.888931 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc7xw" event={"ID":"f319d22f-3fd5-4377-87a2-13a3a8d2abab","Type":"ContainerStarted","Data":"557b7c98148579909970309e2296499385b0a6eb53f861613488879acd41a12e"} Nov 27 18:52:27 crc kubenswrapper[4792]: I1127 18:52:26.899853 4792 generic.go:334] "Generic (PLEG): container finished" podID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerID="10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4" exitCode=0 Nov 27 18:52:27 crc kubenswrapper[4792]: I1127 18:52:26.899923 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc7xw" event={"ID":"f319d22f-3fd5-4377-87a2-13a3a8d2abab","Type":"ContainerDied","Data":"10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4"} Nov 27 18:52:28 crc kubenswrapper[4792]: I1127 18:52:28.937461 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc7xw" event={"ID":"f319d22f-3fd5-4377-87a2-13a3a8d2abab","Type":"ContainerStarted","Data":"17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2"} Nov 27 18:52:30 crc kubenswrapper[4792]: I1127 18:52:30.964707 4792 generic.go:334] "Generic (PLEG): container finished" podID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerID="17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2" exitCode=0 Nov 27 18:52:30 crc kubenswrapper[4792]: I1127 18:52:30.964793 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc7xw" event={"ID":"f319d22f-3fd5-4377-87a2-13a3a8d2abab","Type":"ContainerDied","Data":"17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2"} Nov 27 18:52:30 crc kubenswrapper[4792]: I1127 18:52:30.967669 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5t7f8" event={"ID":"67ad7e58-369e-46e3-8ea0-7802b46be7d3","Type":"ContainerDied","Data":"ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b"} Nov 27 18:52:30 crc kubenswrapper[4792]: I1127 18:52:30.967630 4792 generic.go:334] "Generic (PLEG): container finished" 
podID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerID="ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b" exitCode=0 Nov 27 18:52:31 crc kubenswrapper[4792]: I1127 18:52:31.980913 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5t7f8" event={"ID":"67ad7e58-369e-46e3-8ea0-7802b46be7d3","Type":"ContainerStarted","Data":"e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f"} Nov 27 18:52:31 crc kubenswrapper[4792]: I1127 18:52:31.984559 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc7xw" event={"ID":"f319d22f-3fd5-4377-87a2-13a3a8d2abab","Type":"ContainerStarted","Data":"bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8"} Nov 27 18:52:31 crc kubenswrapper[4792]: I1127 18:52:31.999042 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5t7f8" podStartSLOduration=3.38586506 podStartE2EDuration="10.999025441s" podCreationTimestamp="2025-11-27 18:52:21 +0000 UTC" firstStartedPulling="2025-11-27 18:52:23.843372544 +0000 UTC m=+6166.186198862" lastFinishedPulling="2025-11-27 18:52:31.456532915 +0000 UTC m=+6173.799359243" observedRunningTime="2025-11-27 18:52:31.995010551 +0000 UTC m=+6174.337836869" watchObservedRunningTime="2025-11-27 18:52:31.999025441 +0000 UTC m=+6174.341851759" Nov 27 18:52:32 crc kubenswrapper[4792]: I1127 18:52:32.024995 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qc7xw" podStartSLOduration=3.364081361 podStartE2EDuration="8.024976076s" podCreationTimestamp="2025-11-27 18:52:24 +0000 UTC" firstStartedPulling="2025-11-27 18:52:26.902031915 +0000 UTC m=+6169.244858233" lastFinishedPulling="2025-11-27 18:52:31.56292663 +0000 UTC m=+6173.905752948" observedRunningTime="2025-11-27 18:52:32.014406933 +0000 UTC m=+6174.357233251" watchObservedRunningTime="2025-11-27 18:52:32.024976076 +0000 UTC m=+6174.367802394" Nov 27 18:52:32 crc kubenswrapper[4792]: I1127 18:52:32.329738 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:32 crc kubenswrapper[4792]: I1127 18:52:32.329785 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:52:33 crc kubenswrapper[4792]: I1127 18:52:33.386299 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5t7f8" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="registry-server" probeResult="failure" output=< Nov 27 18:52:33 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:52:33 crc kubenswrapper[4792]: > Nov 27 18:52:34 crc kubenswrapper[4792]: I1127 18:52:34.443956 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cx4g9_e0f375d2-c00a-49a3-963d-5d2bb71fa625/control-plane-machine-set-operator/0.log" Nov 27 18:52:34 crc kubenswrapper[4792]: I1127 18:52:34.674589 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6lwpq_7bbb7ab5-a68c-402c-99d8-9cd47c361ccd/kube-rbac-proxy/0.log" Nov 27 18:52:34 crc kubenswrapper[4792]: I1127 18:52:34.719502 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6lwpq_7bbb7ab5-a68c-402c-99d8-9cd47c361ccd/machine-api-operator/0.log" Nov 27 18:52:34 crc kubenswrapper[4792]: I1127 18:52:34.943011 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:34 crc kubenswrapper[4792]: I1127 18:52:34.943068 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:35 crc kubenswrapper[4792]: I1127 18:52:35.997324 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qc7xw" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="registry-server" probeResult="failure" output=< Nov 27 18:52:35 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:52:35 crc kubenswrapper[4792]: > Nov 27 18:52:38 crc kubenswrapper[4792]: I1127 18:52:38.289872 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:52:38 crc kubenswrapper[4792]: I1127 18:52:38.290174 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:52:38 crc kubenswrapper[4792]: I1127 18:52:38.290227 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 18:52:38 crc kubenswrapper[4792]: I1127 18:52:38.291151 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00e5eb1067167c84c4609eaef8fcbbad526cbcc37291f40611bc937ef3d3b277"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 18:52:38 crc kubenswrapper[4792]: I1127 18:52:38.291197 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://00e5eb1067167c84c4609eaef8fcbbad526cbcc37291f40611bc937ef3d3b277" gracePeriod=600 Nov 27 18:52:39 crc kubenswrapper[4792]: I1127 18:52:39.061113 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="00e5eb1067167c84c4609eaef8fcbbad526cbcc37291f40611bc937ef3d3b277" exitCode=0 Nov 27 18:52:39 crc kubenswrapper[4792]: I1127 18:52:39.061208 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"00e5eb1067167c84c4609eaef8fcbbad526cbcc37291f40611bc937ef3d3b277"} Nov 27 18:52:39 crc kubenswrapper[4792]: I1127 18:52:39.061723 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" 
event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"} Nov 27 18:52:39 crc kubenswrapper[4792]: I1127 18:52:39.061747 4792 scope.go:117] "RemoveContainer" containerID="95df5bb8f3f78665959e39d29fe76c8d5b7b9504145ca3ff686e7dc8d4803da6" Nov 27 18:52:43 crc kubenswrapper[4792]: I1127 18:52:43.385072 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5t7f8" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="registry-server" probeResult="failure" output=< Nov 27 18:52:43 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:52:43 crc kubenswrapper[4792]: > Nov 27 18:52:44 crc kubenswrapper[4792]: I1127 18:52:44.994114 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:45 crc kubenswrapper[4792]: I1127 18:52:45.049137 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:45 crc kubenswrapper[4792]: I1127 18:52:45.236956 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc7xw"] Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.129228 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qc7xw" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="registry-server" containerID="cri-o://bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8" gracePeriod=2 Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.724807 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.813794 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-utilities\") pod \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.814077 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2m7g\" (UniqueName: \"kubernetes.io/projected/f319d22f-3fd5-4377-87a2-13a3a8d2abab-kube-api-access-n2m7g\") pod \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.814162 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-catalog-content\") pod \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\" (UID: \"f319d22f-3fd5-4377-87a2-13a3a8d2abab\") " Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.816998 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-utilities" (OuterVolumeSpecName: "utilities") pod "f319d22f-3fd5-4377-87a2-13a3a8d2abab" (UID: "f319d22f-3fd5-4377-87a2-13a3a8d2abab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.822023 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f319d22f-3fd5-4377-87a2-13a3a8d2abab-kube-api-access-n2m7g" (OuterVolumeSpecName: "kube-api-access-n2m7g") pod "f319d22f-3fd5-4377-87a2-13a3a8d2abab" (UID: "f319d22f-3fd5-4377-87a2-13a3a8d2abab"). InnerVolumeSpecName "kube-api-access-n2m7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.834148 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f319d22f-3fd5-4377-87a2-13a3a8d2abab" (UID: "f319d22f-3fd5-4377-87a2-13a3a8d2abab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.917246 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.917293 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2m7g\" (UniqueName: \"kubernetes.io/projected/f319d22f-3fd5-4377-87a2-13a3a8d2abab-kube-api-access-n2m7g\") on node \"crc\" DevicePath \"\"" Nov 27 18:52:46 crc kubenswrapper[4792]: I1127 18:52:46.917310 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f319d22f-3fd5-4377-87a2-13a3a8d2abab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.144420 4792 generic.go:334] "Generic (PLEG): container finished" podID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerID="bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8" exitCode=0 Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.144500 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc7xw" event={"ID":"f319d22f-3fd5-4377-87a2-13a3a8d2abab","Type":"ContainerDied","Data":"bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8"} Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.144564 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc7xw" event={"ID":"f319d22f-3fd5-4377-87a2-13a3a8d2abab","Type":"ContainerDied","Data":"557b7c98148579909970309e2296499385b0a6eb53f861613488879acd41a12e"} Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.144595 4792 scope.go:117] "RemoveContainer" containerID="bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.144509 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc7xw" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.182072 4792 scope.go:117] "RemoveContainer" containerID="17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.184372 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc7xw"] Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.193792 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc7xw"] Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.209368 4792 scope.go:117] "RemoveContainer" containerID="10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.276168 4792 scope.go:117] "RemoveContainer" containerID="bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8" Nov 27 18:52:47 crc kubenswrapper[4792]: E1127 18:52:47.278217 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8\": container with ID starting with bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8 not found: ID does not exist" containerID="bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.278261 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8"} err="failed to get container status \"bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8\": rpc error: code = NotFound desc = could not find container \"bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8\": container with ID starting with bf646aa472fdcb48b69ca3b1fa1aea69d62518446298d90a4b092223e29830e8 not found: ID does not exist" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.278288 4792 scope.go:117] "RemoveContainer" containerID="17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2" Nov 27 18:52:47 crc kubenswrapper[4792]: E1127 18:52:47.278715 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2\": container with ID starting with 17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2 not found: ID does not exist" containerID="17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.278748 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2"} err="failed to get container status \"17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2\": rpc error: code = NotFound desc = could not find container \"17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2\": container with ID starting with 17b779827e80c8373408195b7c6372e0f941f1db48393369c80928f0286e09c2 not found: ID does not exist" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.278767 4792 scope.go:117] "RemoveContainer" containerID="10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4" Nov 27 18:52:47 crc kubenswrapper[4792]: E1127 18:52:47.278999 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4\": container with ID starting with 10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4 not found: ID does not exist" containerID="10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4" Nov 27 18:52:47 crc kubenswrapper[4792]: I1127 18:52:47.279021 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4"} err="failed to get container status \"10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4\": rpc error: code = NotFound desc = could not find container \"10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4\": container with ID starting with 10e1a140ada46c96b43d6f0f6b86ae22efabcd81e40c694d5ead7deb356934d4 not found: ID does not exist" Nov 27 18:52:48 crc kubenswrapper[4792]: I1127 18:52:48.711028 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" path="/var/lib/kubelet/pods/f319d22f-3fd5-4377-87a2-13a3a8d2abab/volumes" Nov 27 18:52:49 crc kubenswrapper[4792]: I1127 18:52:49.209311 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-r7tdl_d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f/cert-manager-controller/0.log" Nov 27 18:52:49 crc kubenswrapper[4792]: I1127 18:52:49.396015 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-zk24k_ef16aa33-7753-41e0-b78f-533ea2f2dd76/cert-manager-cainjector/0.log" Nov 27 18:52:49 crc kubenswrapper[4792]: I1127 18:52:49.469307 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-l4gfl_3d4b582d-f8e6-477c-be1e-36f53bbc52e5/cert-manager-webhook/0.log" Nov 27 18:52:53 crc kubenswrapper[4792]: I1127 18:52:53.381005 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5t7f8" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="registry-server" probeResult="failure" output=< Nov 27 18:52:53 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:52:53 crc kubenswrapper[4792]: > Nov 27 18:53:03 crc kubenswrapper[4792]: I1127 18:53:03.156874 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-th928_5131ddcc-b3d4-4df4-9474-19896fb63573/nmstate-console-plugin/0.log" Nov 27 18:53:03 crc kubenswrapper[4792]: I1127 18:53:03.386017 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5t7f8" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="registry-server" probeResult="failure" output=< Nov 27 18:53:03 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 18:53:03 crc kubenswrapper[4792]: > Nov 27 18:53:03 crc kubenswrapper[4792]: I1127 18:53:03.455536 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7z5qw_074e28a6-e1f1-43d3-b34a-b2d8c143f8af/nmstate-handler/0.log" Nov 27 18:53:03 crc kubenswrapper[4792]: I1127 18:53:03.494788 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-r6wgp_a21d5243-150d-488b-9cf2-ab95ee2732e6/kube-rbac-proxy/0.log" Nov 27 18:53:03 crc kubenswrapper[4792]: I1127 18:53:03.526694 4792 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-r6wgp_a21d5243-150d-488b-9cf2-ab95ee2732e6/nmstate-metrics/0.log" Nov 27 18:53:03 crc kubenswrapper[4792]: I1127 18:53:03.746991 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-lmhfs_da86e440-8b68-4f21-bc7b-5cc71334ce5a/nmstate-webhook/0.log" Nov 27 18:53:03 crc kubenswrapper[4792]: I1127 18:53:03.764226 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-j62cc_ded38fca-6b87-471c-ac68-423a6963dca6/nmstate-operator/0.log" Nov 27 18:53:12 crc kubenswrapper[4792]: I1127 18:53:12.379639 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:53:12 crc kubenswrapper[4792]: I1127 18:53:12.439473 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:53:12 crc kubenswrapper[4792]: I1127 18:53:12.626944 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5t7f8"] Nov 27 18:53:13 crc kubenswrapper[4792]: I1127 18:53:13.422097 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5t7f8" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="registry-server" containerID="cri-o://e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f" gracePeriod=2 Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.130629 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.283503 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwqbt\" (UniqueName: \"kubernetes.io/projected/67ad7e58-369e-46e3-8ea0-7802b46be7d3-kube-api-access-cwqbt\") pod \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.283720 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-utilities\") pod \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.283758 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-catalog-content\") pod \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\" (UID: \"67ad7e58-369e-46e3-8ea0-7802b46be7d3\") " Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.286805 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-utilities" (OuterVolumeSpecName: "utilities") pod "67ad7e58-369e-46e3-8ea0-7802b46be7d3" (UID: "67ad7e58-369e-46e3-8ea0-7802b46be7d3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.294280 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ad7e58-369e-46e3-8ea0-7802b46be7d3-kube-api-access-cwqbt" (OuterVolumeSpecName: "kube-api-access-cwqbt") pod "67ad7e58-369e-46e3-8ea0-7802b46be7d3" (UID: "67ad7e58-369e-46e3-8ea0-7802b46be7d3"). InnerVolumeSpecName "kube-api-access-cwqbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.388364 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwqbt\" (UniqueName: \"kubernetes.io/projected/67ad7e58-369e-46e3-8ea0-7802b46be7d3-kube-api-access-cwqbt\") on node \"crc\" DevicePath \"\"" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.388428 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.428471 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ad7e58-369e-46e3-8ea0-7802b46be7d3" (UID: "67ad7e58-369e-46e3-8ea0-7802b46be7d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.440327 4792 generic.go:334] "Generic (PLEG): container finished" podID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerID="e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f" exitCode=0 Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.440376 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5t7f8" event={"ID":"67ad7e58-369e-46e3-8ea0-7802b46be7d3","Type":"ContainerDied","Data":"e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f"} Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.440407 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5t7f8" event={"ID":"67ad7e58-369e-46e3-8ea0-7802b46be7d3","Type":"ContainerDied","Data":"98946d0da4b25a2fda88667be83204868759f00c4f64cb5fc66b55dc17e9a5ca"} Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.440430 4792 scope.go:117] "RemoveContainer" containerID="e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.440603 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5t7f8" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.479507 4792 scope.go:117] "RemoveContainer" containerID="ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.485584 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5t7f8"] Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.492037 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ad7e58-369e-46e3-8ea0-7802b46be7d3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.497071 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5t7f8"] Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.515984 4792 scope.go:117] "RemoveContainer" containerID="db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.575255 4792 scope.go:117] "RemoveContainer" containerID="e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f" Nov 27 18:53:14 crc kubenswrapper[4792]: E1127 18:53:14.575689 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f\": container with ID starting with e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f not found: ID does not exist" containerID="e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.575730 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f"} err="failed to get container status \"e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f\": rpc error: code = NotFound desc = could not find container \"e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f\": container with ID starting with e31e87f18dfe9546b6ca7d4a55af0cbab4b8939a955be82a55a3278a1129975f not found: ID does not exist" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.575771 4792 scope.go:117] "RemoveContainer" containerID="ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b" Nov 27 18:53:14 crc kubenswrapper[4792]: E1127 18:53:14.576168 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b\": container with ID starting with ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b not found: ID does not exist" containerID="ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b" Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.576195 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b"} err="failed to get container status \"ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b\": rpc error: code = NotFound desc = could not find container \"ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b\": container with ID starting with ce5df9e1bb26f53769648882e26a56f791fdc7fa362ab424fdee98a89837a99b not found: ID does not exist" Nov 27 18:53:14 crc 
kubenswrapper[4792]: I1127 18:53:14.576214 4792 scope.go:117] "RemoveContainer" containerID="db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5"
Nov 27 18:53:14 crc kubenswrapper[4792]: E1127 18:53:14.576611 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5\": container with ID starting with db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5 not found: ID does not exist" containerID="db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5"
Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.576670 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5"} err="failed to get container status \"db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5\": rpc error: code = NotFound desc = could not find container \"db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5\": container with ID starting with db595747813bd9f9aa90408843d422a537d991fcc4a1ebea7d2bc3d3aa0170f5 not found: ID does not exist"
Nov 27 18:53:14 crc kubenswrapper[4792]: I1127 18:53:14.721070 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" path="/var/lib/kubelet/pods/67ad7e58-369e-46e3-8ea0-7802b46be7d3/volumes"
Nov 27 18:53:17 crc kubenswrapper[4792]: I1127 18:53:17.349080 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5994f6989f-4s6cj_01e17fe3-0b99-4719-8a19-bdb45dabeaac/manager/0.log"
Nov 27 18:53:17 crc kubenswrapper[4792]: I1127 18:53:17.378281 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5994f6989f-4s6cj_01e17fe3-0b99-4719-8a19-bdb45dabeaac/kube-rbac-proxy/0.log"
Nov 27 18:53:31 crc kubenswrapper[4792]: I1127 18:53:31.942190 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-g2l98_286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0/cluster-logging-operator/0.log"
Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.142273 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-kv4ll_b495d78f-2e10-4171-88ba-2ddb90195710/collector/0.log"
Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.144278 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_6a9851c2-362b-425e-adf3-5056cbbfb169/loki-compactor/0.log"
Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.353476 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-dgnv7_b883f630-7c31-4a1a-9633-8770b40c5a69/loki-distributor/0.log"
Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.433005 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-qqqzr_5e5bb18c-7c60-4ec3-ac94-e33904750bb8/gateway/0.log"
Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.458278 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-qqqzr_5e5bb18c-7c60-4ec3-ac94-e33904750bb8/opa/0.log"
Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.536400 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-tqz78_8435b802-65cf-46a0-89fa-fa55e43dfb68/gateway/0.log"
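
The long runs of log.go:25 "Finished parsing log file" entries here and below are kubelet reading container log files under /var/log/pods, e.g. when logs are requested for those pods. Each file holds one "timestamp stream tag message" line per entry, per the CRI logging format; a minimal parser for that format (the sample line is invented for illustration):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseCRILine splits a CRI container log line into its four fields:
    // RFC3339Nano timestamp, stream (stdout/stderr), tag, and the message.
    func parseCRILine(line string) (ts, stream, tag, msg string, err error) {
    	parts := strings.SplitN(line, " ", 4)
    	if len(parts) != 4 {
    		return "", "", "", "", fmt.Errorf("malformed CRI log line")
    	}
    	return parts[0], parts[1], parts[2], parts[3], nil
    }

    func main() {
    	ts, stream, tag, msg, _ := parseCRILine(
    		"2025-11-27T18:53:32.000000000Z stdout F level=info msg=\"Loki started\"")
    	fmt.Println(ts, stream, tag, msg) // tag "F" = full line, "P" = partial line
    }
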
path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-tqz78_8435b802-65cf-46a0-89fa-fa55e43dfb68/gateway/0.log" Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.641852 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-tqz78_8435b802-65cf-46a0-89fa-fa55e43dfb68/opa/0.log" Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.752993 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_3b4c2851-5058-4cfc-9efa-a5d94e7e8090/loki-index-gateway/0.log" Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.971983 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_2a318073-842f-45ff-b6df-bc0abc0d576b/loki-ingester/0.log" Nov 27 18:53:32 crc kubenswrapper[4792]: I1127 18:53:32.986754 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-9rscz_8f70d890-772f-49eb-9c3b-0553bc2349ca/loki-querier/0.log" Nov 27 18:53:33 crc kubenswrapper[4792]: I1127 18:53:33.169249 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-l69ln_1828379a-4323-4161-881c-cf67367db9d4/loki-query-frontend/0.log" Nov 27 18:53:46 crc kubenswrapper[4792]: I1127 18:53:46.890767 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-b2ds9_0056c3c2-a1e5-4733-a428-fd3b91475472/kube-rbac-proxy/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.040545 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-b2ds9_0056c3c2-a1e5-4733-a428-fd3b91475472/controller/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.103242 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-frr-files/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.543282 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-metrics/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.568965 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-reloader/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.576760 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-frr-files/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.596774 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-reloader/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.785009 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-reloader/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.791634 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-frr-files/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.808132 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-metrics/0.log" Nov 27 18:53:47 crc kubenswrapper[4792]: I1127 18:53:47.828854 4792 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-metrics/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.016702 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-metrics/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.038911 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-frr-files/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.053269 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/controller/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.062425 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-reloader/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.303195 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/kube-rbac-proxy/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.309883 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/kube-rbac-proxy-frr/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.312130 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/frr-metrics/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.519346 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/reloader/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.526068 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-k5t27_1cebbd73-ff6c-46b4-8b96-da44b744dc66/frr-k8s-webhook-server/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.733803 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-fbff999dd-d7fwq_4de98138-86e1-4a92-84ff-4ef1a2a1d57b/manager/0.log" Nov 27 18:53:48 crc kubenswrapper[4792]: I1127 18:53:48.984247 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7844df848f-mmmmh_25e66971-1039-45a3-9010-17efb7f2dbf6/webhook-server/0.log" Nov 27 18:53:49 crc kubenswrapper[4792]: I1127 18:53:49.082878 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rptqb_ee78f3b0-9199-41a2-ad7a-64e175706386/kube-rbac-proxy/0.log" Nov 27 18:53:49 crc kubenswrapper[4792]: I1127 18:53:49.794700 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rptqb_ee78f3b0-9199-41a2-ad7a-64e175706386/speaker/0.log" Nov 27 18:53:50 crc kubenswrapper[4792]: I1127 18:53:50.004732 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/frr/0.log" Nov 27 18:54:03 crc kubenswrapper[4792]: I1127 18:54:03.161782 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/util/0.log" Nov 27 18:54:03 crc kubenswrapper[4792]: I1127 18:54:03.462257 4792 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/util/0.log" Nov 27 18:54:03 crc kubenswrapper[4792]: I1127 18:54:03.493616 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/pull/0.log" Nov 27 18:54:03 crc kubenswrapper[4792]: I1127 18:54:03.493754 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/pull/0.log" Nov 27 18:54:03 crc kubenswrapper[4792]: I1127 18:54:03.708466 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/extract/0.log" Nov 27 18:54:03 crc kubenswrapper[4792]: I1127 18:54:03.711699 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/util/0.log" Nov 27 18:54:03 crc kubenswrapper[4792]: I1127 18:54:03.736961 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/pull/0.log" Nov 27 18:54:03 crc kubenswrapper[4792]: I1127 18:54:03.935395 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/util/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.154710 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/util/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.156005 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/pull/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.156064 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/pull/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.349351 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/util/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.467629 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/extract/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.475286 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/pull/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.615432 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/util/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.794207 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/pull/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.802526 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/pull/0.log" Nov 27 18:54:04 crc kubenswrapper[4792]: I1127 18:54:04.820701 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/util/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.008741 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/pull/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.039177 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/util/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.054219 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/extract/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.237122 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/util/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.426381 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/pull/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.451504 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/util/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.463201 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/pull/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.672313 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/util/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.687671 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/pull/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.722619 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/extract/0.log" Nov 27 18:54:05 crc kubenswrapper[4792]: I1127 18:54:05.874040 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/util/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.356871 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/util/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.359326 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/pull/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.390160 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/pull/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.577209 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/util/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.622627 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/extract/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.626991 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/pull/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.762589 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-utilities/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.947364 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-utilities/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.960381 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-content/0.log" Nov 27 18:54:06 crc kubenswrapper[4792]: I1127 18:54:06.978887 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-content/0.log" Nov 27 18:54:07 crc kubenswrapper[4792]: I1127 18:54:07.195916 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-utilities/0.log" Nov 27 18:54:07 crc kubenswrapper[4792]: I1127 18:54:07.200388 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-content/0.log" Nov 27 18:54:07 crc kubenswrapper[4792]: I1127 18:54:07.271998 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-utilities/0.log" Nov 27 18:54:07 crc kubenswrapper[4792]: I1127 18:54:07.535971 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-content/0.log" Nov 27 18:54:07 crc kubenswrapper[4792]: I1127 18:54:07.582850 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-utilities/0.log" Nov 27 18:54:07 crc kubenswrapper[4792]: I1127 18:54:07.598476 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-content/0.log" Nov 27 18:54:07 crc kubenswrapper[4792]: I1127 18:54:07.782901 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-content/0.log" Nov 27 18:54:07 crc kubenswrapper[4792]: I1127 18:54:07.819148 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-utilities/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.033336 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wjl66_4cb69df9-1d51-439c-bb3c-c17bd951bde3/marketplace-operator/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.181674 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-utilities/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.348635 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/registry-server/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.528750 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-content/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.542193 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/registry-server/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.561897 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-utilities/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.601667 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-content/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.703529 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-utilities/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.771052 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-content/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.841591 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-utilities/0.log" Nov 27 18:54:08 crc kubenswrapper[4792]: I1127 18:54:08.961444 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/registry-server/0.log" Nov 27 18:54:09 crc kubenswrapper[4792]: I1127 18:54:09.011424 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-content/0.log" Nov 27 18:54:09 crc kubenswrapper[4792]: I1127 18:54:09.046834 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-utilities/0.log" Nov 27 18:54:09 crc kubenswrapper[4792]: I1127 18:54:09.125207 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-content/0.log" Nov 27 18:54:09 crc kubenswrapper[4792]: I1127 18:54:09.285790 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-utilities/0.log" Nov 27 18:54:09 crc kubenswrapper[4792]: I1127 18:54:09.299680 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-content/0.log" Nov 27 18:54:09 crc kubenswrapper[4792]: I1127 18:54:09.989157 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/registry-server/0.log" Nov 27 18:54:23 crc kubenswrapper[4792]: I1127 18:54:23.091329 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-xwzqt_f6463a7b-af91-4c4a-b67c-10f17f30becd/prometheus-operator/0.log" Nov 27 18:54:23 crc kubenswrapper[4792]: I1127 18:54:23.228615 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_6be8d975-0d93-42ba-9184-21f36ab98ac9/prometheus-operator-admission-webhook/0.log" Nov 27 18:54:23 crc kubenswrapper[4792]: I1127 18:54:23.241423 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_deae3170-952b-45e4-9527-ce9b37f90359/prometheus-operator-admission-webhook/0.log" Nov 27 18:54:23 crc kubenswrapper[4792]: I1127 18:54:23.685850 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-prnwq_6df6ef32-ac48-4c52-9c23-95926cf8c67d/observability-ui-dashboards/0.log" Nov 27 18:54:23 crc kubenswrapper[4792]: I1127 18:54:23.703596 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-qdcnd_b4f46dd1-954f-497f-b491-a3df62aafda6/operator/0.log" Nov 27 18:54:23 crc kubenswrapper[4792]: I1127 18:54:23.879051 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-pjkg6_44e4a3bf-3593-4c1e-b9cc-4c294ed26692/perses-operator/0.log" Nov 27 18:54:37 crc kubenswrapper[4792]: I1127 18:54:37.248890 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5994f6989f-4s6cj_01e17fe3-0b99-4719-8a19-bdb45dabeaac/kube-rbac-proxy/0.log" Nov 27 18:54:37 crc kubenswrapper[4792]: I1127 18:54:37.305243 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5994f6989f-4s6cj_01e17fe3-0b99-4719-8a19-bdb45dabeaac/manager/0.log" Nov 27 18:54:38 crc kubenswrapper[4792]: I1127 18:54:38.289982 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:54:38 crc kubenswrapper[4792]: I1127 18:54:38.290404 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:55:03 crc kubenswrapper[4792]: E1127 18:55:03.679145 4792 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:41710->38.102.83.214:33271: write tcp 38.102.83.214:41710->38.102.83.214:33271: write: broken pipe Nov 27 18:55:08 crc kubenswrapper[4792]: I1127 18:55:08.290415 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:55:08 crc kubenswrapper[4792]: I1127 18:55:08.290994 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.511668 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dj2sk"] Nov 27 18:55:26 crc kubenswrapper[4792]: E1127 18:55:26.512817 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="registry-server" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.512846 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="registry-server" Nov 27 18:55:26 crc kubenswrapper[4792]: E1127 18:55:26.512884 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="extract-content" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.512902 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="extract-content" Nov 27 18:55:26 crc kubenswrapper[4792]: E1127 18:55:26.512920 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="extract-utilities" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.512926 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="extract-utilities" Nov 27 18:55:26 crc 
kubenswrapper[4792]: E1127 18:55:26.512950 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="extract-utilities" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.512955 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="extract-utilities" Nov 27 18:55:26 crc kubenswrapper[4792]: E1127 18:55:26.512973 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="registry-server" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.512981 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="registry-server" Nov 27 18:55:26 crc kubenswrapper[4792]: E1127 18:55:26.512996 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="extract-content" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.513002 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="extract-content" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.513274 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ad7e58-369e-46e3-8ea0-7802b46be7d3" containerName="registry-server" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.513287 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f319d22f-3fd5-4377-87a2-13a3a8d2abab" containerName="registry-server" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.515443 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.523631 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dj2sk"] Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.588381 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczxp\" (UniqueName: \"kubernetes.io/projected/3f670609-db43-4147-a325-d20079d7289b-kube-api-access-gczxp\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.588695 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-utilities\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.588749 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-catalog-content\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.691428 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczxp\" (UniqueName: \"kubernetes.io/projected/3f670609-db43-4147-a325-d20079d7289b-kube-api-access-gczxp\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " 
pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.691731 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-utilities\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.691801 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-catalog-content\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.694744 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-utilities\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.697224 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-catalog-content\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.717749 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczxp\" (UniqueName: \"kubernetes.io/projected/3f670609-db43-4147-a325-d20079d7289b-kube-api-access-gczxp\") pod \"community-operators-dj2sk\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:26 crc kubenswrapper[4792]: I1127 18:55:26.852995 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:27 crc kubenswrapper[4792]: I1127 18:55:27.667963 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dj2sk"] Nov 27 18:55:27 crc kubenswrapper[4792]: W1127 18:55:27.676446 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f670609_db43_4147_a325_d20079d7289b.slice/crio-3e1aa24a89e628ae194edbf4275d6c0ea46199d0ef0ceb5157ebaf13e7f59ea7 WatchSource:0}: Error finding container 3e1aa24a89e628ae194edbf4275d6c0ea46199d0ef0ceb5157ebaf13e7f59ea7: Status 404 returned error can't find the container with id 3e1aa24a89e628ae194edbf4275d6c0ea46199d0ef0ceb5157ebaf13e7f59ea7 Nov 27 18:55:27 crc kubenswrapper[4792]: I1127 18:55:27.921482 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj2sk" event={"ID":"3f670609-db43-4147-a325-d20079d7289b","Type":"ContainerStarted","Data":"3e1aa24a89e628ae194edbf4275d6c0ea46199d0ef0ceb5157ebaf13e7f59ea7"} Nov 27 18:55:28 crc kubenswrapper[4792]: I1127 18:55:28.934193 4792 generic.go:334] "Generic (PLEG): container finished" podID="3f670609-db43-4147-a325-d20079d7289b" containerID="63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66" exitCode=0 Nov 27 18:55:28 crc kubenswrapper[4792]: I1127 18:55:28.934379 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj2sk" event={"ID":"3f670609-db43-4147-a325-d20079d7289b","Type":"ContainerDied","Data":"63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66"} Nov 27 18:55:28 crc kubenswrapper[4792]: I1127 18:55:28.939779 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 18:55:29 crc kubenswrapper[4792]: I1127 18:55:29.946900 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj2sk" event={"ID":"3f670609-db43-4147-a325-d20079d7289b","Type":"ContainerStarted","Data":"4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7"} Nov 27 18:55:32 crc kubenswrapper[4792]: I1127 18:55:32.975970 4792 generic.go:334] "Generic (PLEG): container finished" podID="3f670609-db43-4147-a325-d20079d7289b" containerID="4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7" exitCode=0 Nov 27 18:55:32 crc kubenswrapper[4792]: I1127 18:55:32.976232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj2sk" event={"ID":"3f670609-db43-4147-a325-d20079d7289b","Type":"ContainerDied","Data":"4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7"} Nov 27 18:55:33 crc kubenswrapper[4792]: I1127 18:55:33.988246 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj2sk" event={"ID":"3f670609-db43-4147-a325-d20079d7289b","Type":"ContainerStarted","Data":"f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd"} Nov 27 18:55:34 crc kubenswrapper[4792]: I1127 18:55:34.020238 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dj2sk" podStartSLOduration=3.289929968 podStartE2EDuration="8.020217038s" podCreationTimestamp="2025-11-27 18:55:26 +0000 UTC" firstStartedPulling="2025-11-27 18:55:28.936746507 +0000 UTC m=+6351.279572825" lastFinishedPulling="2025-11-27 18:55:33.667033577 +0000 UTC m=+6356.009859895" 
observedRunningTime="2025-11-27 18:55:34.014561147 +0000 UTC m=+6356.357387465" watchObservedRunningTime="2025-11-27 18:55:34.020217038 +0000 UTC m=+6356.363043356" Nov 27 18:55:36 crc kubenswrapper[4792]: I1127 18:55:36.853190 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:36 crc kubenswrapper[4792]: I1127 18:55:36.853590 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:36 crc kubenswrapper[4792]: I1127 18:55:36.907328 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:38 crc kubenswrapper[4792]: I1127 18:55:38.290727 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 18:55:38 crc kubenswrapper[4792]: I1127 18:55:38.291744 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 18:55:38 crc kubenswrapper[4792]: I1127 18:55:38.291798 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 18:55:38 crc kubenswrapper[4792]: I1127 18:55:38.292786 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 18:55:38 crc kubenswrapper[4792]: I1127 18:55:38.292856 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" gracePeriod=600 Nov 27 18:55:38 crc kubenswrapper[4792]: E1127 18:55:38.985324 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:55:39 crc kubenswrapper[4792]: I1127 18:55:39.044383 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" exitCode=0 Nov 27 18:55:39 crc kubenswrapper[4792]: I1127 18:55:39.044429 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" 
event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"} Nov 27 18:55:39 crc kubenswrapper[4792]: I1127 18:55:39.044498 4792 scope.go:117] "RemoveContainer" containerID="00e5eb1067167c84c4609eaef8fcbbad526cbcc37291f40611bc937ef3d3b277" Nov 27 18:55:39 crc kubenswrapper[4792]: I1127 18:55:39.045431 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:55:39 crc kubenswrapper[4792]: E1127 18:55:39.045767 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:55:46 crc kubenswrapper[4792]: I1127 18:55:46.913318 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:46 crc kubenswrapper[4792]: I1127 18:55:46.970006 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dj2sk"] Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.133988 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dj2sk" podUID="3f670609-db43-4147-a325-d20079d7289b" containerName="registry-server" containerID="cri-o://f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd" gracePeriod=2 Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.676154 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.856036 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-catalog-content\") pod \"3f670609-db43-4147-a325-d20079d7289b\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.856480 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-utilities\") pod \"3f670609-db43-4147-a325-d20079d7289b\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.856716 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczxp\" (UniqueName: \"kubernetes.io/projected/3f670609-db43-4147-a325-d20079d7289b-kube-api-access-gczxp\") pod \"3f670609-db43-4147-a325-d20079d7289b\" (UID: \"3f670609-db43-4147-a325-d20079d7289b\") " Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.857231 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-utilities" (OuterVolumeSpecName: "utilities") pod "3f670609-db43-4147-a325-d20079d7289b" (UID: "3f670609-db43-4147-a325-d20079d7289b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.857606 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.870656 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f670609-db43-4147-a325-d20079d7289b-kube-api-access-gczxp" (OuterVolumeSpecName: "kube-api-access-gczxp") pod "3f670609-db43-4147-a325-d20079d7289b" (UID: "3f670609-db43-4147-a325-d20079d7289b"). InnerVolumeSpecName "kube-api-access-gczxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.907690 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f670609-db43-4147-a325-d20079d7289b" (UID: "3f670609-db43-4147-a325-d20079d7289b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.959859 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczxp\" (UniqueName: \"kubernetes.io/projected/3f670609-db43-4147-a325-d20079d7289b-kube-api-access-gczxp\") on node \"crc\" DevicePath \"\"" Nov 27 18:55:47 crc kubenswrapper[4792]: I1127 18:55:47.961900 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f670609-db43-4147-a325-d20079d7289b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.160269 4792 generic.go:334] "Generic (PLEG): container finished" podID="3f670609-db43-4147-a325-d20079d7289b" containerID="f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd" exitCode=0 Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.160332 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj2sk" event={"ID":"3f670609-db43-4147-a325-d20079d7289b","Type":"ContainerDied","Data":"f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd"} Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.160592 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dj2sk" event={"ID":"3f670609-db43-4147-a325-d20079d7289b","Type":"ContainerDied","Data":"3e1aa24a89e628ae194edbf4275d6c0ea46199d0ef0ceb5157ebaf13e7f59ea7"} Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.160617 4792 scope.go:117] "RemoveContainer" containerID="f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.160383 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dj2sk" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.211141 4792 scope.go:117] "RemoveContainer" containerID="4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.211335 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dj2sk"] Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.226346 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dj2sk"] Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.237786 4792 scope.go:117] "RemoveContainer" containerID="63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.327534 4792 scope.go:117] "RemoveContainer" containerID="f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd" Nov 27 18:55:48 crc kubenswrapper[4792]: E1127 18:55:48.327980 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd\": container with ID starting with f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd not found: ID does not exist" containerID="f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.328023 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd"} err="failed to get container status \"f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd\": rpc error: code = NotFound desc = could not find container \"f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd\": container with ID starting with f8c0423fe86703e03d03fc7845080e42a13b723215822674c6fd84c3fbf4a9cd not found: ID does not exist" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.328056 4792 scope.go:117] "RemoveContainer" containerID="4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7" Nov 27 18:55:48 crc kubenswrapper[4792]: E1127 18:55:48.328563 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7\": container with ID starting with 4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7 not found: ID does not exist" containerID="4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.328751 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7"} err="failed to get container status \"4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7\": rpc error: code = NotFound desc = could not find container \"4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7\": container with ID starting with 4f0f9c1a0bd275d312de876596940e789a74344c0df77a693f075112ebf0e1f7 not found: ID does not exist" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.328866 4792 scope.go:117] "RemoveContainer" containerID="63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66" Nov 27 18:55:48 crc kubenswrapper[4792]: E1127 18:55:48.329492 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66\": container with ID starting with 63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66 not found: ID does not exist" containerID="63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.329523 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66"} err="failed to get container status \"63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66\": rpc error: code = NotFound desc = could not find container \"63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66\": container with ID starting with 63ac2f7069d8553a7221e1986ce91b011b056716a107d667b3476bfd9596cc66 not found: ID does not exist" Nov 27 18:55:48 crc kubenswrapper[4792]: I1127 18:55:48.700151 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f670609-db43-4147-a325-d20079d7289b" path="/var/lib/kubelet/pods/3f670609-db43-4147-a325-d20079d7289b/volumes" Nov 27 18:55:53 crc kubenswrapper[4792]: I1127 18:55:53.686845 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:55:53 crc kubenswrapper[4792]: E1127 18:55:53.688748 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:56:08 crc kubenswrapper[4792]: I1127 18:56:08.698018 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:56:08 crc kubenswrapper[4792]: E1127 18:56:08.699009 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:56:23 crc kubenswrapper[4792]: I1127 18:56:23.687098 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:56:23 crc kubenswrapper[4792]: E1127 18:56:23.687889 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:56:35 crc kubenswrapper[4792]: I1127 18:56:35.687387 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:56:35 crc kubenswrapper[4792]: E1127 18:56:35.688402 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:56:39 crc kubenswrapper[4792]: I1127 18:56:39.718356 4792 generic.go:334] "Generic (PLEG): container finished" podID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerID="fbd2364d7a81e7a919fef0351722b0fb6769db5de1111fc85d0a5b0bdf0afff4" exitCode=0 Nov 27 18:56:39 crc kubenswrapper[4792]: I1127 18:56:39.718480 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4htl/must-gather-csjgx" event={"ID":"288241f0-8105-4cd4-9e5b-b3abb299e2ef","Type":"ContainerDied","Data":"fbd2364d7a81e7a919fef0351722b0fb6769db5de1111fc85d0a5b0bdf0afff4"} Nov 27 18:56:39 crc kubenswrapper[4792]: I1127 18:56:39.720393 4792 scope.go:117] "RemoveContainer" containerID="fbd2364d7a81e7a919fef0351722b0fb6769db5de1111fc85d0a5b0bdf0afff4" Nov 27 18:56:40 crc kubenswrapper[4792]: I1127 18:56:40.427937 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4htl_must-gather-csjgx_288241f0-8105-4cd4-9e5b-b3abb299e2ef/gather/0.log" Nov 27 18:56:47 crc kubenswrapper[4792]: I1127 18:56:47.686688 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:56:47 crc kubenswrapper[4792]: E1127 18:56:47.687547 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.268682 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4htl/must-gather-csjgx"] Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.287724 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-c4htl/must-gather-csjgx" podUID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerName="copy" containerID="cri-o://2fbbb649966ccb11c0c231f4bfc9e198ddd089bc202fcf9152b57ddcd4c67e99" gracePeriod=2 Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.312198 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4htl/must-gather-csjgx"] Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.814932 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4htl_must-gather-csjgx_288241f0-8105-4cd4-9e5b-b3abb299e2ef/copy/0.log" Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.815395 4792 generic.go:334] "Generic (PLEG): container finished" podID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerID="2fbbb649966ccb11c0c231f4bfc9e198ddd089bc202fcf9152b57ddcd4c67e99" exitCode=143 Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.815441 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2501799a9d5b94830673951f5c141fc1ac6ad2a959d4ad403f802996ca3acd3d" Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.837002 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-c4htl_must-gather-csjgx_288241f0-8105-4cd4-9e5b-b3abb299e2ef/copy/0.log" Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.837272 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.957392 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq2qh\" (UniqueName: \"kubernetes.io/projected/288241f0-8105-4cd4-9e5b-b3abb299e2ef-kube-api-access-zq2qh\") pod \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\" (UID: \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\") " Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.957676 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/288241f0-8105-4cd4-9e5b-b3abb299e2ef-must-gather-output\") pod \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\" (UID: \"288241f0-8105-4cd4-9e5b-b3abb299e2ef\") " Nov 27 18:56:48 crc kubenswrapper[4792]: I1127 18:56:48.964885 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288241f0-8105-4cd4-9e5b-b3abb299e2ef-kube-api-access-zq2qh" (OuterVolumeSpecName: "kube-api-access-zq2qh") pod "288241f0-8105-4cd4-9e5b-b3abb299e2ef" (UID: "288241f0-8105-4cd4-9e5b-b3abb299e2ef"). InnerVolumeSpecName "kube-api-access-zq2qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 18:56:49 crc kubenswrapper[4792]: I1127 18:56:49.060867 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq2qh\" (UniqueName: \"kubernetes.io/projected/288241f0-8105-4cd4-9e5b-b3abb299e2ef-kube-api-access-zq2qh\") on node \"crc\" DevicePath \"\"" Nov 27 18:56:49 crc kubenswrapper[4792]: I1127 18:56:49.145723 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/288241f0-8105-4cd4-9e5b-b3abb299e2ef-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "288241f0-8105-4cd4-9e5b-b3abb299e2ef" (UID: "288241f0-8105-4cd4-9e5b-b3abb299e2ef"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 18:56:49 crc kubenswrapper[4792]: I1127 18:56:49.165269 4792 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/288241f0-8105-4cd4-9e5b-b3abb299e2ef-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 27 18:56:49 crc kubenswrapper[4792]: I1127 18:56:49.826282 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4htl/must-gather-csjgx" Nov 27 18:56:50 crc kubenswrapper[4792]: I1127 18:56:50.706817 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" path="/var/lib/kubelet/pods/288241f0-8105-4cd4-9e5b-b3abb299e2ef/volumes" Nov 27 18:57:01 crc kubenswrapper[4792]: I1127 18:57:01.686817 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:57:01 crc kubenswrapper[4792]: E1127 18:57:01.687687 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:57:09 crc kubenswrapper[4792]: I1127 18:57:09.500347 4792 scope.go:117] "RemoveContainer" containerID="2fbbb649966ccb11c0c231f4bfc9e198ddd089bc202fcf9152b57ddcd4c67e99" Nov 27 18:57:09 crc kubenswrapper[4792]: I1127 18:57:09.536527 4792 scope.go:117] "RemoveContainer" containerID="fbd2364d7a81e7a919fef0351722b0fb6769db5de1111fc85d0a5b0bdf0afff4" Nov 27 18:57:14 crc kubenswrapper[4792]: I1127 18:57:14.702104 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:57:14 crc kubenswrapper[4792]: E1127 18:57:14.703302 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:57:26 crc kubenswrapper[4792]: I1127 18:57:26.687316 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:57:26 crc kubenswrapper[4792]: E1127 18:57:26.688946 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:57:38 crc kubenswrapper[4792]: I1127 18:57:38.695563 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 18:57:38 crc kubenswrapper[4792]: E1127 18:57:38.696254 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 18:57:50 crc kubenswrapper[4792]: I1127 18:57:50.687081 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" 
Nov 27 18:57:50 crc kubenswrapper[4792]: E1127 18:57:50.687811 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:58:02 crc kubenswrapper[4792]: I1127 18:58:02.686893 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:58:02 crc kubenswrapper[4792]: E1127 18:58:02.688138 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:58:13 crc kubenswrapper[4792]: I1127 18:58:13.689167 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:58:13 crc kubenswrapper[4792]: E1127 18:58:13.691124 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:58:27 crc kubenswrapper[4792]: I1127 18:58:27.686730 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:58:27 crc kubenswrapper[4792]: E1127 18:58:27.687420 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:58:42 crc kubenswrapper[4792]: I1127 18:58:42.687097 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:58:42 crc kubenswrapper[4792]: E1127 18:58:42.687957 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:58:53 crc kubenswrapper[4792]: I1127 18:58:53.686528 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:58:53 crc kubenswrapper[4792]: E1127 18:58:53.687322 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:59:06 crc kubenswrapper[4792]: I1127 18:59:06.687199 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:59:06 crc kubenswrapper[4792]: E1127 18:59:06.688158 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:59:21 crc kubenswrapper[4792]: I1127 18:59:21.690295 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:59:21 crc kubenswrapper[4792]: E1127 18:59:21.691175 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:59:34 crc kubenswrapper[4792]: I1127 18:59:34.686350 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:59:34 crc kubenswrapper[4792]: E1127 18:59:34.687235 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.688343 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:59:47 crc kubenswrapper[4792]: E1127 18:59:47.689861 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.932420 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lmr6d/must-gather-6vcwp"]
Nov 27 18:59:47 crc kubenswrapper[4792]: E1127 18:59:47.932935 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f670609-db43-4147-a325-d20079d7289b" containerName="extract-utilities"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.932953 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f670609-db43-4147-a325-d20079d7289b" containerName="extract-utilities"
Nov 27 18:59:47 crc kubenswrapper[4792]: E1127 18:59:47.932967 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerName="gather"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.932975 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerName="gather"
Nov 27 18:59:47 crc kubenswrapper[4792]: E1127 18:59:47.933018 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f670609-db43-4147-a325-d20079d7289b" containerName="registry-server"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.933026 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f670609-db43-4147-a325-d20079d7289b" containerName="registry-server"
Nov 27 18:59:47 crc kubenswrapper[4792]: E1127 18:59:47.933043 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerName="copy"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.933051 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerName="copy"
Nov 27 18:59:47 crc kubenswrapper[4792]: E1127 18:59:47.933064 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f670609-db43-4147-a325-d20079d7289b" containerName="extract-content"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.933070 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f670609-db43-4147-a325-d20079d7289b" containerName="extract-content"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.933311 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f670609-db43-4147-a325-d20079d7289b" containerName="registry-server"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.933344 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerName="gather"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.933367 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="288241f0-8105-4cd4-9e5b-b3abb299e2ef" containerName="copy"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.934940 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/must-gather-6vcwp"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.955375 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lmr6d"/"kube-root-ca.crt"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.957169 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-lmr6d"/"openshift-service-ca.crt"
Nov 27 18:59:47 crc kubenswrapper[4792]: I1127 18:59:47.963849 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-lmr6d"/"default-dockercfg-42vs4"
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.049807 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lmr6d/must-gather-6vcwp"]
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.054092 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjm8\" (UniqueName: \"kubernetes.io/projected/09b4a162-3614-4ae6-8ff2-05169fed8b06-kube-api-access-mgjm8\") pod \"must-gather-6vcwp\" (UID: \"09b4a162-3614-4ae6-8ff2-05169fed8b06\") " pod="openshift-must-gather-lmr6d/must-gather-6vcwp"
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.054223 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/09b4a162-3614-4ae6-8ff2-05169fed8b06-must-gather-output\") pod \"must-gather-6vcwp\" (UID: \"09b4a162-3614-4ae6-8ff2-05169fed8b06\") " pod="openshift-must-gather-lmr6d/must-gather-6vcwp"
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.158050 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjm8\" (UniqueName: \"kubernetes.io/projected/09b4a162-3614-4ae6-8ff2-05169fed8b06-kube-api-access-mgjm8\") pod \"must-gather-6vcwp\" (UID: \"09b4a162-3614-4ae6-8ff2-05169fed8b06\") " pod="openshift-must-gather-lmr6d/must-gather-6vcwp"
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.158193 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/09b4a162-3614-4ae6-8ff2-05169fed8b06-must-gather-output\") pod \"must-gather-6vcwp\" (UID: \"09b4a162-3614-4ae6-8ff2-05169fed8b06\") " pod="openshift-must-gather-lmr6d/must-gather-6vcwp"
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.158810 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/09b4a162-3614-4ae6-8ff2-05169fed8b06-must-gather-output\") pod \"must-gather-6vcwp\" (UID: \"09b4a162-3614-4ae6-8ff2-05169fed8b06\") " pod="openshift-must-gather-lmr6d/must-gather-6vcwp"
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.197406 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjm8\" (UniqueName: \"kubernetes.io/projected/09b4a162-3614-4ae6-8ff2-05169fed8b06-kube-api-access-mgjm8\") pod \"must-gather-6vcwp\" (UID: \"09b4a162-3614-4ae6-8ff2-05169fed8b06\") " pod="openshift-must-gather-lmr6d/must-gather-6vcwp"
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.265904 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/must-gather-6vcwp"
Nov 27 18:59:48 crc kubenswrapper[4792]: I1127 18:59:48.804675 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lmr6d/must-gather-6vcwp"]
Nov 27 18:59:49 crc kubenswrapper[4792]: I1127 18:59:49.455052 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/must-gather-6vcwp" event={"ID":"09b4a162-3614-4ae6-8ff2-05169fed8b06","Type":"ContainerStarted","Data":"6d1b46823b28d66e4abebcb6af05d70353a125206bbc131fd46c700d558b7d7f"}
Nov 27 18:59:49 crc kubenswrapper[4792]: I1127 18:59:49.455574 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/must-gather-6vcwp" event={"ID":"09b4a162-3614-4ae6-8ff2-05169fed8b06","Type":"ContainerStarted","Data":"54b4835700e6756bb54251b360e0ae559a1849abb15e94fda39aa72f4a7d83bd"}
Nov 27 18:59:50 crc kubenswrapper[4792]: I1127 18:59:50.467146 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/must-gather-6vcwp" event={"ID":"09b4a162-3614-4ae6-8ff2-05169fed8b06","Type":"ContainerStarted","Data":"fe4c60f4728578aea1b35731758b4bcee76c8265304611f299427ce0e7c07ab9"}
Nov 27 18:59:52 crc kubenswrapper[4792]: E1127 18:59:52.389624 4792 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.214:50816->38.102.83.214:33271: read tcp 38.102.83.214:50816->38.102.83.214:33271: read: connection reset by peer
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.099931 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lmr6d/must-gather-6vcwp" podStartSLOduration=6.099905694 podStartE2EDuration="6.099905694s" podCreationTimestamp="2025-11-27 18:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 18:59:50.483395251 +0000 UTC m=+6612.826221579" watchObservedRunningTime="2025-11-27 18:59:53.099905694 +0000 UTC m=+6615.442732012"
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.106449 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-5wcq9"]
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.108415 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-5wcq9"
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.199684 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-host\") pod \"crc-debug-5wcq9\" (UID: \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\") " pod="openshift-must-gather-lmr6d/crc-debug-5wcq9"
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.199753 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92gns\" (UniqueName: \"kubernetes.io/projected/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-kube-api-access-92gns\") pod \"crc-debug-5wcq9\" (UID: \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\") " pod="openshift-must-gather-lmr6d/crc-debug-5wcq9"
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.302437 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-host\") pod \"crc-debug-5wcq9\" (UID: \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\") " pod="openshift-must-gather-lmr6d/crc-debug-5wcq9"
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.302575 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92gns\" (UniqueName: \"kubernetes.io/projected/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-kube-api-access-92gns\") pod \"crc-debug-5wcq9\" (UID: \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\") " pod="openshift-must-gather-lmr6d/crc-debug-5wcq9"
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.303337 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-host\") pod \"crc-debug-5wcq9\" (UID: \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\") " pod="openshift-must-gather-lmr6d/crc-debug-5wcq9"
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.327452 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92gns\" (UniqueName: \"kubernetes.io/projected/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-kube-api-access-92gns\") pod \"crc-debug-5wcq9\" (UID: \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\") " pod="openshift-must-gather-lmr6d/crc-debug-5wcq9"
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.432086 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-5wcq9"
Nov 27 18:59:53 crc kubenswrapper[4792]: W1127 18:59:53.464574 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1df8c29c_1eb7_4e9d_9712_322fda3b98c5.slice/crio-d31b5eb7d77a28fce7ccbf84045013cd745359008d717a45547ca1916a17cda6 WatchSource:0}: Error finding container d31b5eb7d77a28fce7ccbf84045013cd745359008d717a45547ca1916a17cda6: Status 404 returned error can't find the container with id d31b5eb7d77a28fce7ccbf84045013cd745359008d717a45547ca1916a17cda6
Nov 27 18:59:53 crc kubenswrapper[4792]: I1127 18:59:53.523582 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/crc-debug-5wcq9" event={"ID":"1df8c29c-1eb7-4e9d-9712-322fda3b98c5","Type":"ContainerStarted","Data":"d31b5eb7d77a28fce7ccbf84045013cd745359008d717a45547ca1916a17cda6"}
Nov 27 18:59:54 crc kubenswrapper[4792]: I1127 18:59:54.535369 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/crc-debug-5wcq9" event={"ID":"1df8c29c-1eb7-4e9d-9712-322fda3b98c5","Type":"ContainerStarted","Data":"c044ecfe5603a45632abadb11eb7f3280bcc25fe2fb5ab845a150110b1ce1e2a"}
Nov 27 18:59:54 crc kubenswrapper[4792]: I1127 18:59:54.566766 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lmr6d/crc-debug-5wcq9" podStartSLOduration=1.566740748 podStartE2EDuration="1.566740748s" podCreationTimestamp="2025-11-27 18:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 18:59:54.549833647 +0000 UTC m=+6616.892659965" watchObservedRunningTime="2025-11-27 18:59:54.566740748 +0000 UTC m=+6616.909567066"
Nov 27 18:59:58 crc kubenswrapper[4792]: I1127 18:59:58.694846 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad"
Nov 27 18:59:58 crc kubenswrapper[4792]: E1127 18:59:58.695841 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.177363 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c"]
Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.179529 4792 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.182252 4792 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.186263 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kcqn\" (UniqueName: \"kubernetes.io/projected/642b4f85-dac7-4c6f-b4b5-499d4759242c-kube-api-access-5kcqn\") pod \"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.186315 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642b4f85-dac7-4c6f-b4b5-499d4759242c-secret-volume\") pod \"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.186367 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642b4f85-dac7-4c6f-b4b5-499d4759242c-config-volume\") pod \"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.193702 4792 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.196207 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c"] Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.289310 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcqn\" (UniqueName: \"kubernetes.io/projected/642b4f85-dac7-4c6f-b4b5-499d4759242c-kube-api-access-5kcqn\") pod \"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.289695 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642b4f85-dac7-4c6f-b4b5-499d4759242c-secret-volume\") pod \"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.289750 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642b4f85-dac7-4c6f-b4b5-499d4759242c-config-volume\") pod \"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.290752 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642b4f85-dac7-4c6f-b4b5-499d4759242c-config-volume\") pod 
\"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.294918 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642b4f85-dac7-4c6f-b4b5-499d4759242c-secret-volume\") pod \"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.315349 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcqn\" (UniqueName: \"kubernetes.io/projected/642b4f85-dac7-4c6f-b4b5-499d4759242c-kube-api-access-5kcqn\") pod \"collect-profiles-29404500-ccf6c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:00 crc kubenswrapper[4792]: I1127 19:00:00.511252 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:01 crc kubenswrapper[4792]: I1127 19:00:01.051384 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c"] Nov 27 19:00:01 crc kubenswrapper[4792]: I1127 19:00:01.628589 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" event={"ID":"642b4f85-dac7-4c6f-b4b5-499d4759242c","Type":"ContainerStarted","Data":"61dfdf87192c6aa879b8c7a18f1f054bea04ad805847128e96d7e54d4e8ff754"} Nov 27 19:00:01 crc kubenswrapper[4792]: I1127 19:00:01.629153 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" event={"ID":"642b4f85-dac7-4c6f-b4b5-499d4759242c","Type":"ContainerStarted","Data":"450b01c7e13269af04b5f3fec0c0ed7bdefe141b28bbdc33c9fb4c8ea0711dd8"} Nov 27 19:00:01 crc kubenswrapper[4792]: I1127 19:00:01.654165 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" podStartSLOduration=1.6541452589999999 podStartE2EDuration="1.654145259s" podCreationTimestamp="2025-11-27 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 19:00:01.643563816 +0000 UTC m=+6623.986390134" watchObservedRunningTime="2025-11-27 19:00:01.654145259 +0000 UTC m=+6623.996971577" Nov 27 19:00:02 crc kubenswrapper[4792]: I1127 19:00:02.641270 4792 generic.go:334] "Generic (PLEG): container finished" podID="642b4f85-dac7-4c6f-b4b5-499d4759242c" containerID="61dfdf87192c6aa879b8c7a18f1f054bea04ad805847128e96d7e54d4e8ff754" exitCode=0 Nov 27 19:00:02 crc kubenswrapper[4792]: I1127 19:00:02.642749 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" event={"ID":"642b4f85-dac7-4c6f-b4b5-499d4759242c","Type":"ContainerDied","Data":"61dfdf87192c6aa879b8c7a18f1f054bea04ad805847128e96d7e54d4e8ff754"} Nov 27 19:00:04 crc kubenswrapper[4792]: I1127 19:00:04.663717 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" 
event={"ID":"642b4f85-dac7-4c6f-b4b5-499d4759242c","Type":"ContainerDied","Data":"450b01c7e13269af04b5f3fec0c0ed7bdefe141b28bbdc33c9fb4c8ea0711dd8"} Nov 27 19:00:04 crc kubenswrapper[4792]: I1127 19:00:04.664239 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="450b01c7e13269af04b5f3fec0c0ed7bdefe141b28bbdc33c9fb4c8ea0711dd8" Nov 27 19:00:05 crc kubenswrapper[4792]: I1127 19:00:05.975319 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.056738 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642b4f85-dac7-4c6f-b4b5-499d4759242c-secret-volume\") pod \"642b4f85-dac7-4c6f-b4b5-499d4759242c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.056839 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kcqn\" (UniqueName: \"kubernetes.io/projected/642b4f85-dac7-4c6f-b4b5-499d4759242c-kube-api-access-5kcqn\") pod \"642b4f85-dac7-4c6f-b4b5-499d4759242c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.057016 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642b4f85-dac7-4c6f-b4b5-499d4759242c-config-volume\") pod \"642b4f85-dac7-4c6f-b4b5-499d4759242c\" (UID: \"642b4f85-dac7-4c6f-b4b5-499d4759242c\") " Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.058778 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642b4f85-dac7-4c6f-b4b5-499d4759242c-config-volume" (OuterVolumeSpecName: "config-volume") pod "642b4f85-dac7-4c6f-b4b5-499d4759242c" (UID: "642b4f85-dac7-4c6f-b4b5-499d4759242c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.072575 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/642b4f85-dac7-4c6f-b4b5-499d4759242c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "642b4f85-dac7-4c6f-b4b5-499d4759242c" (UID: "642b4f85-dac7-4c6f-b4b5-499d4759242c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.075710 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/642b4f85-dac7-4c6f-b4b5-499d4759242c-kube-api-access-5kcqn" (OuterVolumeSpecName: "kube-api-access-5kcqn") pod "642b4f85-dac7-4c6f-b4b5-499d4759242c" (UID: "642b4f85-dac7-4c6f-b4b5-499d4759242c"). InnerVolumeSpecName "kube-api-access-5kcqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.159814 4792 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/642b4f85-dac7-4c6f-b4b5-499d4759242c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.159852 4792 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/642b4f85-dac7-4c6f-b4b5-499d4759242c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.159864 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kcqn\" (UniqueName: \"kubernetes.io/projected/642b4f85-dac7-4c6f-b4b5-499d4759242c-kube-api-access-5kcqn\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:06 crc kubenswrapper[4792]: I1127 19:00:06.684690 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29404500-ccf6c" Nov 27 19:00:07 crc kubenswrapper[4792]: I1127 19:00:07.061858 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd"] Nov 27 19:00:07 crc kubenswrapper[4792]: I1127 19:00:07.073942 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29404455-rgbwd"] Nov 27 19:00:08 crc kubenswrapper[4792]: I1127 19:00:08.703162 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6e5c9f-cb09-413c-b804-a37b4fa3df59" path="/var/lib/kubelet/pods/0c6e5c9f-cb09-413c-b804-a37b4fa3df59/volumes" Nov 27 19:00:09 crc kubenswrapper[4792]: I1127 19:00:09.686788 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 19:00:09 crc kubenswrapper[4792]: E1127 19:00:09.687575 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 19:00:09 crc kubenswrapper[4792]: I1127 19:00:09.700119 4792 scope.go:117] "RemoveContainer" containerID="586dd0be80654f40da1f21276eef65e0c5c2ca707c276738252e2fb278ecf1bb" Nov 27 19:00:21 crc kubenswrapper[4792]: I1127 19:00:21.686945 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 19:00:21 crc kubenswrapper[4792]: E1127 19:00:21.687790 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 19:00:33 crc kubenswrapper[4792]: I1127 19:00:33.687278 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 19:00:33 crc kubenswrapper[4792]: E1127 19:00:33.688079 4792 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 19:00:40 crc kubenswrapper[4792]: I1127 19:00:40.434220 4792 generic.go:334] "Generic (PLEG): container finished" podID="1df8c29c-1eb7-4e9d-9712-322fda3b98c5" containerID="c044ecfe5603a45632abadb11eb7f3280bcc25fe2fb5ab845a150110b1ce1e2a" exitCode=0 Nov 27 19:00:40 crc kubenswrapper[4792]: I1127 19:00:40.434666 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/crc-debug-5wcq9" event={"ID":"1df8c29c-1eb7-4e9d-9712-322fda3b98c5","Type":"ContainerDied","Data":"c044ecfe5603a45632abadb11eb7f3280bcc25fe2fb5ab845a150110b1ce1e2a"} Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.577098 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-5wcq9" Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.626204 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-5wcq9"] Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.645462 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-5wcq9"] Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.688054 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92gns\" (UniqueName: \"kubernetes.io/projected/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-kube-api-access-92gns\") pod \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\" (UID: \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\") " Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.688228 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-host\") pod \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\" (UID: \"1df8c29c-1eb7-4e9d-9712-322fda3b98c5\") " Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.688305 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-host" (OuterVolumeSpecName: "host") pod "1df8c29c-1eb7-4e9d-9712-322fda3b98c5" (UID: "1df8c29c-1eb7-4e9d-9712-322fda3b98c5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.689933 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-host\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.702055 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-kube-api-access-92gns" (OuterVolumeSpecName: "kube-api-access-92gns") pod "1df8c29c-1eb7-4e9d-9712-322fda3b98c5" (UID: "1df8c29c-1eb7-4e9d-9712-322fda3b98c5"). InnerVolumeSpecName "kube-api-access-92gns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:00:41 crc kubenswrapper[4792]: I1127 19:00:41.791784 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92gns\" (UniqueName: \"kubernetes.io/projected/1df8c29c-1eb7-4e9d-9712-322fda3b98c5-kube-api-access-92gns\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.456938 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d31b5eb7d77a28fce7ccbf84045013cd745359008d717a45547ca1916a17cda6" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.457010 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-5wcq9" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.703989 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1df8c29c-1eb7-4e9d-9712-322fda3b98c5" path="/var/lib/kubelet/pods/1df8c29c-1eb7-4e9d-9712-322fda3b98c5/volumes" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.837480 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-p974t"] Nov 27 19:00:42 crc kubenswrapper[4792]: E1127 19:00:42.838399 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1df8c29c-1eb7-4e9d-9712-322fda3b98c5" containerName="container-00" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.838417 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="1df8c29c-1eb7-4e9d-9712-322fda3b98c5" containerName="container-00" Nov 27 19:00:42 crc kubenswrapper[4792]: E1127 19:00:42.838437 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642b4f85-dac7-4c6f-b4b5-499d4759242c" containerName="collect-profiles" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.838446 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="642b4f85-dac7-4c6f-b4b5-499d4759242c" containerName="collect-profiles" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.838687 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df8c29c-1eb7-4e9d-9712-322fda3b98c5" containerName="container-00" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.838707 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="642b4f85-dac7-4c6f-b4b5-499d4759242c" containerName="collect-profiles" Nov 27 19:00:42 crc kubenswrapper[4792]: I1127 19:00:42.840855 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.023204 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfbpk\" (UniqueName: \"kubernetes.io/projected/6acfea06-f124-4ba6-9a7e-06be811bf35a-kube-api-access-bfbpk\") pod \"crc-debug-p974t\" (UID: \"6acfea06-f124-4ba6-9a7e-06be811bf35a\") " pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.023893 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6acfea06-f124-4ba6-9a7e-06be811bf35a-host\") pod \"crc-debug-p974t\" (UID: \"6acfea06-f124-4ba6-9a7e-06be811bf35a\") " pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.126302 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6acfea06-f124-4ba6-9a7e-06be811bf35a-host\") pod \"crc-debug-p974t\" (UID: \"6acfea06-f124-4ba6-9a7e-06be811bf35a\") " pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.126415 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfbpk\" (UniqueName: \"kubernetes.io/projected/6acfea06-f124-4ba6-9a7e-06be811bf35a-kube-api-access-bfbpk\") pod \"crc-debug-p974t\" (UID: \"6acfea06-f124-4ba6-9a7e-06be811bf35a\") " pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.126537 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6acfea06-f124-4ba6-9a7e-06be811bf35a-host\") pod \"crc-debug-p974t\" (UID: \"6acfea06-f124-4ba6-9a7e-06be811bf35a\") " pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.152599 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfbpk\" (UniqueName: \"kubernetes.io/projected/6acfea06-f124-4ba6-9a7e-06be811bf35a-kube-api-access-bfbpk\") pod \"crc-debug-p974t\" (UID: \"6acfea06-f124-4ba6-9a7e-06be811bf35a\") " pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.161782 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:43 crc kubenswrapper[4792]: W1127 19:00:43.195529 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6acfea06_f124_4ba6_9a7e_06be811bf35a.slice/crio-b155416f0c2aaa1951d03cbb2fe503a5de41f9cdb3acf96bd7109dad190a9f93 WatchSource:0}: Error finding container b155416f0c2aaa1951d03cbb2fe503a5de41f9cdb3acf96bd7109dad190a9f93: Status 404 returned error can't find the container with id b155416f0c2aaa1951d03cbb2fe503a5de41f9cdb3acf96bd7109dad190a9f93 Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.473849 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/crc-debug-p974t" event={"ID":"6acfea06-f124-4ba6-9a7e-06be811bf35a","Type":"ContainerStarted","Data":"7d2e8863305edd7ea7008a0a671a6a10d95a090f8c77d114eae974e5ac28bcf6"} Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.474228 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/crc-debug-p974t" event={"ID":"6acfea06-f124-4ba6-9a7e-06be811bf35a","Type":"ContainerStarted","Data":"b155416f0c2aaa1951d03cbb2fe503a5de41f9cdb3acf96bd7109dad190a9f93"} Nov 27 19:00:43 crc kubenswrapper[4792]: I1127 19:00:43.517351 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lmr6d/crc-debug-p974t" podStartSLOduration=1.517324973 podStartE2EDuration="1.517324973s" podCreationTimestamp="2025-11-27 19:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 19:00:43.48624281 +0000 UTC m=+6665.829069128" watchObservedRunningTime="2025-11-27 19:00:43.517324973 +0000 UTC m=+6665.860151331" Nov 27 19:00:44 crc kubenswrapper[4792]: I1127 19:00:44.487232 4792 generic.go:334] "Generic (PLEG): container finished" podID="6acfea06-f124-4ba6-9a7e-06be811bf35a" containerID="7d2e8863305edd7ea7008a0a671a6a10d95a090f8c77d114eae974e5ac28bcf6" exitCode=0 Nov 27 19:00:44 crc kubenswrapper[4792]: I1127 19:00:44.487380 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/crc-debug-p974t" event={"ID":"6acfea06-f124-4ba6-9a7e-06be811bf35a","Type":"ContainerDied","Data":"7d2e8863305edd7ea7008a0a671a6a10d95a090f8c77d114eae974e5ac28bcf6"} Nov 27 19:00:45 crc kubenswrapper[4792]: I1127 19:00:45.631201 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:45 crc kubenswrapper[4792]: I1127 19:00:45.778596 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6acfea06-f124-4ba6-9a7e-06be811bf35a-host\") pod \"6acfea06-f124-4ba6-9a7e-06be811bf35a\" (UID: \"6acfea06-f124-4ba6-9a7e-06be811bf35a\") " Nov 27 19:00:45 crc kubenswrapper[4792]: I1127 19:00:45.778669 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6acfea06-f124-4ba6-9a7e-06be811bf35a-host" (OuterVolumeSpecName: "host") pod "6acfea06-f124-4ba6-9a7e-06be811bf35a" (UID: "6acfea06-f124-4ba6-9a7e-06be811bf35a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 19:00:45 crc kubenswrapper[4792]: I1127 19:00:45.778770 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfbpk\" (UniqueName: \"kubernetes.io/projected/6acfea06-f124-4ba6-9a7e-06be811bf35a-kube-api-access-bfbpk\") pod \"6acfea06-f124-4ba6-9a7e-06be811bf35a\" (UID: \"6acfea06-f124-4ba6-9a7e-06be811bf35a\") " Nov 27 19:00:45 crc kubenswrapper[4792]: I1127 19:00:45.779593 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6acfea06-f124-4ba6-9a7e-06be811bf35a-host\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:45 crc kubenswrapper[4792]: I1127 19:00:45.787124 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acfea06-f124-4ba6-9a7e-06be811bf35a-kube-api-access-bfbpk" (OuterVolumeSpecName: "kube-api-access-bfbpk") pod "6acfea06-f124-4ba6-9a7e-06be811bf35a" (UID: "6acfea06-f124-4ba6-9a7e-06be811bf35a"). InnerVolumeSpecName "kube-api-access-bfbpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:00:45 crc kubenswrapper[4792]: I1127 19:00:45.881291 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfbpk\" (UniqueName: \"kubernetes.io/projected/6acfea06-f124-4ba6-9a7e-06be811bf35a-kube-api-access-bfbpk\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:46 crc kubenswrapper[4792]: I1127 19:00:46.140976 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-p974t"] Nov 27 19:00:46 crc kubenswrapper[4792]: I1127 19:00:46.154603 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-p974t"] Nov 27 19:00:46 crc kubenswrapper[4792]: I1127 19:00:46.513247 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b155416f0c2aaa1951d03cbb2fe503a5de41f9cdb3acf96bd7109dad190a9f93" Nov 27 19:00:46 crc kubenswrapper[4792]: I1127 19:00:46.513352 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-p974t" Nov 27 19:00:46 crc kubenswrapper[4792]: I1127 19:00:46.703486 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acfea06-f124-4ba6-9a7e-06be811bf35a" path="/var/lib/kubelet/pods/6acfea06-f124-4ba6-9a7e-06be811bf35a/volumes" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.352873 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-5lqds"] Nov 27 19:00:47 crc kubenswrapper[4792]: E1127 19:00:47.353503 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acfea06-f124-4ba6-9a7e-06be811bf35a" containerName="container-00" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.353528 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acfea06-f124-4ba6-9a7e-06be811bf35a" containerName="container-00" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.353887 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acfea06-f124-4ba6-9a7e-06be811bf35a" containerName="container-00" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.355143 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.419686 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/899e0481-65ec-4b47-af43-408beef9a904-host\") pod \"crc-debug-5lqds\" (UID: \"899e0481-65ec-4b47-af43-408beef9a904\") " pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.419889 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzrs\" (UniqueName: \"kubernetes.io/projected/899e0481-65ec-4b47-af43-408beef9a904-kube-api-access-mzzrs\") pod \"crc-debug-5lqds\" (UID: \"899e0481-65ec-4b47-af43-408beef9a904\") " pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.529544 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/899e0481-65ec-4b47-af43-408beef9a904-host\") pod \"crc-debug-5lqds\" (UID: \"899e0481-65ec-4b47-af43-408beef9a904\") " pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.529769 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzzrs\" (UniqueName: \"kubernetes.io/projected/899e0481-65ec-4b47-af43-408beef9a904-kube-api-access-mzzrs\") pod \"crc-debug-5lqds\" (UID: \"899e0481-65ec-4b47-af43-408beef9a904\") " pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.530347 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/899e0481-65ec-4b47-af43-408beef9a904-host\") pod \"crc-debug-5lqds\" (UID: \"899e0481-65ec-4b47-af43-408beef9a904\") " pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.562611 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzzrs\" (UniqueName: \"kubernetes.io/projected/899e0481-65ec-4b47-af43-408beef9a904-kube-api-access-mzzrs\") pod \"crc-debug-5lqds\" (UID: \"899e0481-65ec-4b47-af43-408beef9a904\") " pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.674902 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:47 crc kubenswrapper[4792]: I1127 19:00:47.687160 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 19:00:47 crc kubenswrapper[4792]: W1127 19:00:47.716584 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod899e0481_65ec_4b47_af43_408beef9a904.slice/crio-84775a64917172524a98621b6a5a447f1582596586b57e1ac956b69f17f123d5 WatchSource:0}: Error finding container 84775a64917172524a98621b6a5a447f1582596586b57e1ac956b69f17f123d5: Status 404 returned error can't find the container with id 84775a64917172524a98621b6a5a447f1582596586b57e1ac956b69f17f123d5 Nov 27 19:00:48 crc kubenswrapper[4792]: I1127 19:00:48.560505 4792 generic.go:334] "Generic (PLEG): container finished" podID="899e0481-65ec-4b47-af43-408beef9a904" containerID="1e62cf4adb47b07b88fd88d09b504918395731b303a6646dfd9db25f8e97440f" exitCode=0 Nov 27 19:00:48 crc kubenswrapper[4792]: I1127 19:00:48.560570 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/crc-debug-5lqds" event={"ID":"899e0481-65ec-4b47-af43-408beef9a904","Type":"ContainerDied","Data":"1e62cf4adb47b07b88fd88d09b504918395731b303a6646dfd9db25f8e97440f"} Nov 27 19:00:48 crc kubenswrapper[4792]: I1127 19:00:48.560904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/crc-debug-5lqds" event={"ID":"899e0481-65ec-4b47-af43-408beef9a904","Type":"ContainerStarted","Data":"84775a64917172524a98621b6a5a447f1582596586b57e1ac956b69f17f123d5"} Nov 27 19:00:48 crc kubenswrapper[4792]: I1127 19:00:48.564153 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"e7914bc2a1025b6f976e576c7aa0b7b22098e5a8a12f9f7496d8fdc76a25c346"} Nov 27 19:00:48 crc kubenswrapper[4792]: I1127 19:00:48.641704 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-5lqds"] Nov 27 19:00:48 crc kubenswrapper[4792]: I1127 19:00:48.654770 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lmr6d/crc-debug-5lqds"] Nov 27 19:00:49 crc kubenswrapper[4792]: I1127 19:00:49.707600 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:49 crc kubenswrapper[4792]: I1127 19:00:49.796770 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzzrs\" (UniqueName: \"kubernetes.io/projected/899e0481-65ec-4b47-af43-408beef9a904-kube-api-access-mzzrs\") pod \"899e0481-65ec-4b47-af43-408beef9a904\" (UID: \"899e0481-65ec-4b47-af43-408beef9a904\") " Nov 27 19:00:49 crc kubenswrapper[4792]: I1127 19:00:49.797026 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/899e0481-65ec-4b47-af43-408beef9a904-host\") pod \"899e0481-65ec-4b47-af43-408beef9a904\" (UID: \"899e0481-65ec-4b47-af43-408beef9a904\") " Nov 27 19:00:49 crc kubenswrapper[4792]: I1127 19:00:49.797228 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899e0481-65ec-4b47-af43-408beef9a904-host" (OuterVolumeSpecName: "host") pod "899e0481-65ec-4b47-af43-408beef9a904" (UID: "899e0481-65ec-4b47-af43-408beef9a904"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 27 19:00:49 crc kubenswrapper[4792]: I1127 19:00:49.797612 4792 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/899e0481-65ec-4b47-af43-408beef9a904-host\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:49 crc kubenswrapper[4792]: I1127 19:00:49.802395 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899e0481-65ec-4b47-af43-408beef9a904-kube-api-access-mzzrs" (OuterVolumeSpecName: "kube-api-access-mzzrs") pod "899e0481-65ec-4b47-af43-408beef9a904" (UID: "899e0481-65ec-4b47-af43-408beef9a904"). InnerVolumeSpecName "kube-api-access-mzzrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:00:49 crc kubenswrapper[4792]: I1127 19:00:49.900308 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzzrs\" (UniqueName: \"kubernetes.io/projected/899e0481-65ec-4b47-af43-408beef9a904-kube-api-access-mzzrs\") on node \"crc\" DevicePath \"\"" Nov 27 19:00:50 crc kubenswrapper[4792]: I1127 19:00:50.584679 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lmr6d/crc-debug-5lqds" Nov 27 19:00:50 crc kubenswrapper[4792]: I1127 19:00:50.584694 4792 scope.go:117] "RemoveContainer" containerID="1e62cf4adb47b07b88fd88d09b504918395731b303a6646dfd9db25f8e97440f" Nov 27 19:00:50 crc kubenswrapper[4792]: I1127 19:00:50.705207 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899e0481-65ec-4b47-af43-408beef9a904" path="/var/lib/kubelet/pods/899e0481-65ec-4b47-af43-408beef9a904/volumes" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.155496 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29404501-kjq24"] Nov 27 19:01:00 crc kubenswrapper[4792]: E1127 19:01:00.156685 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899e0481-65ec-4b47-af43-408beef9a904" containerName="container-00" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.156700 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="899e0481-65ec-4b47-af43-408beef9a904" containerName="container-00" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.156963 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="899e0481-65ec-4b47-af43-408beef9a904" containerName="container-00" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.157774 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.170613 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29404501-kjq24"] Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.237260 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-combined-ca-bundle\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.237428 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-config-data\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.237450 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-fernet-keys\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.237483 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nw68\" (UniqueName: \"kubernetes.io/projected/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-kube-api-access-2nw68\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.339574 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-config-data\") pod \"keystone-cron-29404501-kjq24\" (UID: 
\"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.339672 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-fernet-keys\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.339753 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nw68\" (UniqueName: \"kubernetes.io/projected/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-kube-api-access-2nw68\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.339891 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-combined-ca-bundle\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.346341 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-combined-ca-bundle\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.346838 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-fernet-keys\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.346940 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-config-data\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.355970 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nw68\" (UniqueName: \"kubernetes.io/projected/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-kube-api-access-2nw68\") pod \"keystone-cron-29404501-kjq24\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:00 crc kubenswrapper[4792]: I1127 19:01:00.524862 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:01 crc kubenswrapper[4792]: W1127 19:01:01.182262 4792 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d7fb1f4_b8d4_4628_9c34_4f8bb926b66c.slice/crio-eff4af2b32cfa5bea01b23b0e7ca817d9fdf96ddc3cdffebfb8bbcbeeef157f7 WatchSource:0}: Error finding container eff4af2b32cfa5bea01b23b0e7ca817d9fdf96ddc3cdffebfb8bbcbeeef157f7: Status 404 returned error can't find the container with id eff4af2b32cfa5bea01b23b0e7ca817d9fdf96ddc3cdffebfb8bbcbeeef157f7 Nov 27 19:01:01 crc kubenswrapper[4792]: I1127 19:01:01.183090 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29404501-kjq24"] Nov 27 19:01:01 crc kubenswrapper[4792]: I1127 19:01:01.716902 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29404501-kjq24" event={"ID":"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c","Type":"ContainerStarted","Data":"ed66d4c80fc0a400bf2e967eda672acd1c99e6e7ee31b0888f7a87de044c3366"} Nov 27 19:01:01 crc kubenswrapper[4792]: I1127 19:01:01.717210 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29404501-kjq24" event={"ID":"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c","Type":"ContainerStarted","Data":"eff4af2b32cfa5bea01b23b0e7ca817d9fdf96ddc3cdffebfb8bbcbeeef157f7"} Nov 27 19:01:01 crc kubenswrapper[4792]: I1127 19:01:01.737965 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29404501-kjq24" podStartSLOduration=1.737944949 podStartE2EDuration="1.737944949s" podCreationTimestamp="2025-11-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-27 19:01:01.735308394 +0000 UTC m=+6684.078134712" watchObservedRunningTime="2025-11-27 19:01:01.737944949 +0000 UTC m=+6684.080771267" Nov 27 19:01:05 crc kubenswrapper[4792]: I1127 19:01:05.805926 4792 generic.go:334] "Generic (PLEG): container finished" podID="5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c" containerID="ed66d4c80fc0a400bf2e967eda672acd1c99e6e7ee31b0888f7a87de044c3366" exitCode=0 Nov 27 19:01:05 crc kubenswrapper[4792]: I1127 19:01:05.806035 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29404501-kjq24" event={"ID":"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c","Type":"ContainerDied","Data":"ed66d4c80fc0a400bf2e967eda672acd1c99e6e7ee31b0888f7a87de044c3366"} Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.290386 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.437777 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nw68\" (UniqueName: \"kubernetes.io/projected/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-kube-api-access-2nw68\") pod \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.437902 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-fernet-keys\") pod \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.437951 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-config-data\") pod \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.438161 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-combined-ca-bundle\") pod \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\" (UID: \"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c\") " Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.445450 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-kube-api-access-2nw68" (OuterVolumeSpecName: "kube-api-access-2nw68") pod "5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c" (UID: "5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c"). InnerVolumeSpecName "kube-api-access-2nw68". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.445960 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c" (UID: "5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.476902 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c" (UID: "5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.507760 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-config-data" (OuterVolumeSpecName: "config-data") pod "5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c" (UID: "5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.541138 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nw68\" (UniqueName: \"kubernetes.io/projected/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-kube-api-access-2nw68\") on node \"crc\" DevicePath \"\"" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.541175 4792 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.541190 4792 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-config-data\") on node \"crc\" DevicePath \"\"" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.541201 4792 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.828093 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29404501-kjq24" event={"ID":"5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c","Type":"ContainerDied","Data":"eff4af2b32cfa5bea01b23b0e7ca817d9fdf96ddc3cdffebfb8bbcbeeef157f7"} Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.828134 4792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eff4af2b32cfa5bea01b23b0e7ca817d9fdf96ddc3cdffebfb8bbcbeeef157f7" Nov 27 19:01:07 crc kubenswrapper[4792]: I1127 19:01:07.828190 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29404501-kjq24" Nov 27 19:01:24 crc kubenswrapper[4792]: I1127 19:01:24.838989 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b5e40c25-6ce7-4631-9877-a7c983c966f7/aodh-api/0.log" Nov 27 19:01:24 crc kubenswrapper[4792]: I1127 19:01:24.983074 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b5e40c25-6ce7-4631-9877-a7c983c966f7/aodh-evaluator/0.log" Nov 27 19:01:25 crc kubenswrapper[4792]: I1127 19:01:25.016878 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b5e40c25-6ce7-4631-9877-a7c983c966f7/aodh-notifier/0.log" Nov 27 19:01:25 crc kubenswrapper[4792]: I1127 19:01:25.035919 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b5e40c25-6ce7-4631-9877-a7c983c966f7/aodh-listener/0.log" Nov 27 19:01:25 crc kubenswrapper[4792]: I1127 19:01:25.208197 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-65f9f97c5d-544l8_e0a4f95d-c1db-43ea-9d79-185c188a4f9b/barbican-api/0.log" Nov 27 19:01:25 crc kubenswrapper[4792]: I1127 19:01:25.251574 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-65f9f97c5d-544l8_e0a4f95d-c1db-43ea-9d79-185c188a4f9b/barbican-api-log/0.log" Nov 27 19:01:25 crc kubenswrapper[4792]: I1127 19:01:25.375999 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6ff478798d-8s6ww_a8029b26-0d9c-428e-af30-62c262f079f4/barbican-keystone-listener/0.log" Nov 27 19:01:25 crc kubenswrapper[4792]: I1127 19:01:25.590533 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-b9854ff99-9l5qq_64269486-bbcb-49d2-ab84-0591965b9277/barbican-worker/0.log" Nov 27 19:01:25 crc kubenswrapper[4792]: I1127 19:01:25.595147 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6ff478798d-8s6ww_a8029b26-0d9c-428e-af30-62c262f079f4/barbican-keystone-listener-log/0.log" Nov 27 19:01:25 crc kubenswrapper[4792]: I1127 19:01:25.640854 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-b9854ff99-9l5qq_64269486-bbcb-49d2-ab84-0591965b9277/barbican-worker-log/0.log" Nov 27 19:01:26 crc kubenswrapper[4792]: I1127 19:01:26.036463 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-qqr9g_cfb67295-f5ab-48cb-acae-25420d9d77f4/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:26 crc kubenswrapper[4792]: I1127 19:01:26.220756 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e4f18974-fb5b-4bb2-906b-9f17d1297b04/ceilometer-central-agent/0.log" Nov 27 19:01:26 crc kubenswrapper[4792]: I1127 19:01:26.295914 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e4f18974-fb5b-4bb2-906b-9f17d1297b04/ceilometer-notification-agent/0.log" Nov 27 19:01:26 crc kubenswrapper[4792]: I1127 19:01:26.513750 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e4f18974-fb5b-4bb2-906b-9f17d1297b04/proxy-httpd/0.log" Nov 27 19:01:26 crc kubenswrapper[4792]: I1127 19:01:26.622597 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e4f18974-fb5b-4bb2-906b-9f17d1297b04/sg-core/0.log" Nov 27 19:01:26 crc kubenswrapper[4792]: I1127 19:01:26.773607 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bbc2fff7-567e-4a6d-918a-7f6f430486c1/cinder-api-log/0.log" Nov 27 19:01:26 crc kubenswrapper[4792]: I1127 19:01:26.870113 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bbc2fff7-567e-4a6d-918a-7f6f430486c1/cinder-api/0.log" Nov 27 19:01:26 crc kubenswrapper[4792]: I1127 19:01:26.978936 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_301f71fa-43fd-4005-a753-5127a2e7df97/cinder-scheduler/0.log" Nov 27 19:01:27 crc kubenswrapper[4792]: I1127 19:01:27.068841 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_301f71fa-43fd-4005-a753-5127a2e7df97/probe/0.log" Nov 27 19:01:27 crc kubenswrapper[4792]: I1127 19:01:27.230009 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-gx9pq_941f3fd2-382e-4dc2-94f4-39df69607cee/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:27 crc kubenswrapper[4792]: I1127 19:01:27.346133 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cnk4r_97d6e3f1-b04e-4f38-b104-3f74f8ed4683/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:27 crc kubenswrapper[4792]: I1127 19:01:27.463086 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-76jbs_ec33e14b-5586-4b5e-a807-396841a63250/init/0.log" Nov 27 19:01:27 crc kubenswrapper[4792]: I1127 19:01:27.736293 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-76jbs_ec33e14b-5586-4b5e-a807-396841a63250/init/0.log" Nov 27 19:01:27 crc kubenswrapper[4792]: I1127 19:01:27.760529 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-87lxc_2b8542bf-b789-4e1a-9ff9-5375dc57cc94/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:27 crc kubenswrapper[4792]: I1127 19:01:27.864458 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-76jbs_ec33e14b-5586-4b5e-a807-396841a63250/dnsmasq-dns/0.log" Nov 27 19:01:28 crc kubenswrapper[4792]: I1127 19:01:28.084772 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4657ccb2-3806-41d0-932d-195b809345fd/glance-httpd/0.log" Nov 27 19:01:28 crc kubenswrapper[4792]: I1127 19:01:28.199045 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4657ccb2-3806-41d0-932d-195b809345fd/glance-log/0.log" Nov 27 19:01:28 crc kubenswrapper[4792]: I1127 19:01:28.401877 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d64731cc-a4fe-498e-9553-4f7f5fce34a2/glance-httpd/0.log" Nov 27 19:01:28 crc kubenswrapper[4792]: I1127 19:01:28.437971 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d64731cc-a4fe-498e-9553-4f7f5fce34a2/glance-log/0.log" Nov 27 19:01:29 crc kubenswrapper[4792]: I1127 19:01:29.018708 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-695947b5db-q2kr8_9603abd5-f9a5-4ace-9d0f-652992d6de1e/heat-engine/0.log" Nov 27 19:01:29 crc kubenswrapper[4792]: I1127 19:01:29.060879 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-jvdcj_95dfa4b6-84cb-439b-a9ff-fbe5b048973e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:29 crc kubenswrapper[4792]: I1127 19:01:29.560086 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-nv2q9_6dbb090d-6543-4de1-80f3-1a61798d7870/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:29 crc kubenswrapper[4792]: I1127 19:01:29.583447 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-f54bc59f4-fb7f4_e7c59788-726a-4159-91a6-766cad09ff7d/heat-api/0.log" Nov 27 19:01:29 crc kubenswrapper[4792]: I1127 19:01:29.585065 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6cc9cdbfc-zr59q_02e83c23-359e-428f-acab-41d6912a84ab/heat-cfnapi/0.log" Nov 27 19:01:29 crc kubenswrapper[4792]: I1127 19:01:29.849298 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29404441-l4szq_eea31a88-b7d7-4537-bd17-1a9edcaee2d9/keystone-cron/0.log" Nov 27 19:01:29 crc kubenswrapper[4792]: I1127 19:01:29.993982 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29404501-kjq24_5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c/keystone-cron/0.log" Nov 27 19:01:30 crc kubenswrapper[4792]: I1127 19:01:30.125541 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_dfcff168-fa89-462b-a1e2-8422c13e0ab3/kube-state-metrics/0.log" Nov 27 19:01:30 crc kubenswrapper[4792]: I1127 19:01:30.344114 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-c92t9_c1228795-b08e-4f02-ac5c-a9bc71058d23/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:30 crc kubenswrapper[4792]: I1127 19:01:30.375545 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6667648786-v844v_f6f2c587-9e8d-49dc-a8de-a74e79b8ddb7/keystone-api/0.log" Nov 27 19:01:30 crc kubenswrapper[4792]: I1127 19:01:30.455525 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-pjp2g_caa13c46-5c39-46bb-a2bb-cfa46caae2b4/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:30 crc kubenswrapper[4792]: I1127 19:01:30.649110 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_0248f4d6-3146-4bb3-85d8-03cdfb42238a/mysqld-exporter/0.log" Nov 27 19:01:31 crc kubenswrapper[4792]: I1127 19:01:31.095514 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fff8f565-9t8rn_1972fb8a-2570-4dd8-8ae1-b3fccf229e4b/neutron-httpd/0.log" Nov 27 19:01:31 crc kubenswrapper[4792]: I1127 19:01:31.097016 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ltrmk_87368e9c-b9b2-499a-9825-de4ff047aabd/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:31 crc kubenswrapper[4792]: I1127 19:01:31.100095 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fff8f565-9t8rn_1972fb8a-2570-4dd8-8ae1-b3fccf229e4b/neutron-api/0.log" Nov 27 19:01:31 crc kubenswrapper[4792]: I1127 19:01:31.798922 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e07ae62e-2f3c-4122-aba4-fbe7aaf16ff9/nova-cell0-conductor-conductor/0.log" Nov 27 19:01:32 crc kubenswrapper[4792]: I1127 19:01:32.106858 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3c9b4c85-0700-45e9-b663-ca02ecf5009d/nova-cell1-conductor-conductor/0.log" Nov 27 19:01:32 crc kubenswrapper[4792]: I1127 19:01:32.160558 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7cd1499d-a3bb-449a-85d6-fcb81e3b43ee/nova-api-log/0.log" Nov 27 19:01:32 crc kubenswrapper[4792]: I1127 19:01:32.476183 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0ccf4ccd-7b7b-4dbb-bc4f-55630569c4dd/nova-cell1-novncproxy-novncproxy/0.log" Nov 27 19:01:32 crc kubenswrapper[4792]: I1127 19:01:32.543981 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-fqdvn_83d3f635-5c64-4827-a54d-1b21ca1b6570/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:32 crc kubenswrapper[4792]: I1127 19:01:32.783972 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_76bd9753-9395-4ae1-a0c5-10c1ee3f0347/nova-metadata-log/0.log" Nov 27 19:01:32 crc kubenswrapper[4792]: I1127 19:01:32.959372 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7cd1499d-a3bb-449a-85d6-fcb81e3b43ee/nova-api-api/0.log" Nov 27 19:01:33 crc kubenswrapper[4792]: I1127 19:01:33.421584 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b/mysql-bootstrap/0.log" Nov 27 19:01:33 crc kubenswrapper[4792]: I1127 19:01:33.513234 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_09e9c959-b479-4008-8042-ffa78bb38460/nova-scheduler-scheduler/0.log" Nov 27 19:01:33 crc kubenswrapper[4792]: I1127 19:01:33.621818 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b/mysql-bootstrap/0.log" Nov 27 19:01:33 crc kubenswrapper[4792]: I1127 19:01:33.652674 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_1d97d9a8-d646-4cd4-99a5-e0e5f5976f5b/galera/0.log" Nov 27 19:01:33 crc kubenswrapper[4792]: I1127 19:01:33.929421 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8ed6358b-2030-436d-a847-724a53f802ea/mysql-bootstrap/0.log" Nov 27 19:01:34 crc kubenswrapper[4792]: I1127 19:01:34.069400 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8ed6358b-2030-436d-a847-724a53f802ea/mysql-bootstrap/0.log" Nov 27 19:01:34 crc kubenswrapper[4792]: I1127 19:01:34.140227 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_8ed6358b-2030-436d-a847-724a53f802ea/galera/0.log" Nov 27 19:01:34 crc kubenswrapper[4792]: I1127 19:01:34.300169 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e88dd573-027f-458e-81ed-c133e141afb6/openstackclient/0.log" Nov 27 19:01:34 crc kubenswrapper[4792]: I1127 19:01:34.449825 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-z7zwq_88e47e4b-d7fb-4dfc-8352-9705403282a6/openstack-network-exporter/0.log" Nov 27 19:01:34 crc kubenswrapper[4792]: I1127 19:01:34.628520 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-n2t2x_8445903e-bdf0-4581-a2ce-728410f878ac/ovn-controller/0.log" Nov 27 19:01:34 crc kubenswrapper[4792]: I1127 19:01:34.763775 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dzgvb_a82ac7ae-1443-4fbc-a8bb-2383c148b809/ovsdb-server-init/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.056392 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dzgvb_a82ac7ae-1443-4fbc-a8bb-2383c148b809/ovsdb-server/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.086045 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dzgvb_a82ac7ae-1443-4fbc-a8bb-2383c148b809/ovs-vswitchd/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.098367 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dzgvb_a82ac7ae-1443-4fbc-a8bb-2383c148b809/ovsdb-server-init/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.393448 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jn9hq_a8b213a4-d6e2-4ed9-b67b-625fab313079/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.509751 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_446f7473-cc3f-42b6-931c-eb1747df2c73/openstack-network-exporter/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.639862 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_446f7473-cc3f-42b6-931c-eb1747df2c73/ovn-northd/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.745753 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_3b7e5347-cd23-498f-ac14-95ce8f106b97/openstack-network-exporter/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.771408 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_76bd9753-9395-4ae1-a0c5-10c1ee3f0347/nova-metadata-metadata/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.853437 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b7e5347-cd23-498f-ac14-95ce8f106b97/ovsdbserver-nb/0.log" Nov 27 19:01:35 crc kubenswrapper[4792]: I1127 19:01:35.993052 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5/openstack-network-exporter/0.log" Nov 27 19:01:36 crc kubenswrapper[4792]: I1127 19:01:36.017047 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4708b49f-1bdd-4ef7-8b3e-9d572f4a2cb5/ovsdbserver-sb/0.log" Nov 27 19:01:36 crc kubenswrapper[4792]: I1127 19:01:36.327811 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/init-config-reloader/0.log" Nov 27 19:01:36 crc kubenswrapper[4792]: I1127 19:01:36.420070 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68b5cd97dd-hfxs2_26b3eb4f-347e-4da5-8da9-56f7620f43a8/placement-log/0.log" Nov 27 19:01:36 crc kubenswrapper[4792]: I1127 19:01:36.432362 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-68b5cd97dd-hfxs2_26b3eb4f-347e-4da5-8da9-56f7620f43a8/placement-api/0.log" Nov 27 19:01:36 crc kubenswrapper[4792]: I1127 19:01:36.809492 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/init-config-reloader/0.log" Nov 27 19:01:36 crc kubenswrapper[4792]: I1127 19:01:36.872424 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/thanos-sidecar/0.log" Nov 27 19:01:36 crc kubenswrapper[4792]: I1127 19:01:36.883837 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/prometheus/0.log" Nov 27 19:01:36 crc kubenswrapper[4792]: I1127 19:01:36.884234 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4ad7f090-9b35-4a85-86d5-1763f234a768/config-reloader/0.log" Nov 27 19:01:37 crc kubenswrapper[4792]: I1127 19:01:37.124443 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73468e89-af69-44aa-bc4d-66c7e34a8dff/setup-container/0.log" Nov 27 19:01:37 crc kubenswrapper[4792]: I1127 19:01:37.349805 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73468e89-af69-44aa-bc4d-66c7e34a8dff/rabbitmq/0.log" Nov 27 19:01:37 crc kubenswrapper[4792]: I1127 19:01:37.358987 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_73468e89-af69-44aa-bc4d-66c7e34a8dff/setup-container/0.log" Nov 27 19:01:37 crc kubenswrapper[4792]: I1127 19:01:37.402395 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d2993a9-7994-4249-bfd1-acc7b734eb16/setup-container/0.log" Nov 27 19:01:37 crc kubenswrapper[4792]: I1127 19:01:37.671451 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5r9tz_20b3860e-a914-42cd-b2e7-35ab54507a89/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:37 crc kubenswrapper[4792]: I1127 19:01:37.681947 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d2993a9-7994-4249-bfd1-acc7b734eb16/setup-container/0.log" Nov 27 19:01:37 crc kubenswrapper[4792]: I1127 19:01:37.780790 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_6d2993a9-7994-4249-bfd1-acc7b734eb16/rabbitmq/0.log" Nov 27 19:01:37 crc kubenswrapper[4792]: I1127 19:01:37.925183 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mdb54_70c3419b-b42e-42f5-be83-4de5d0e38566/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:38 crc kubenswrapper[4792]: I1127 19:01:38.004456 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-glhsj_8de61141-d67f-4491-ade9-57da76c018e7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:38 crc kubenswrapper[4792]: I1127 19:01:38.232995 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-28hzm_1c6f6f25-0120-4355-9803-5e7b6743588b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:38 crc kubenswrapper[4792]: I1127 19:01:38.234435 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hjr4g_72c0a753-4805-42a5-9b41-4fc97aad561b/ssh-known-hosts-edpm-deployment/0.log" Nov 27 19:01:38 crc kubenswrapper[4792]: I1127 19:01:38.544308 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55455cb8cf-gtjxc_9ace987a-3f62-48ce-8c4b-b9c50cd2a29e/proxy-server/0.log" Nov 27 19:01:38 crc kubenswrapper[4792]: I1127 19:01:38.782715 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55455cb8cf-gtjxc_9ace987a-3f62-48ce-8c4b-b9c50cd2a29e/proxy-httpd/0.log" Nov 27 19:01:38 crc kubenswrapper[4792]: I1127 19:01:38.823025 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2n56v_66a7953b-06d4-453f-801c-4873d0d43c7a/swift-ring-rebalance/0.log" Nov 27 19:01:38 crc kubenswrapper[4792]: I1127 19:01:38.878863 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/account-auditor/0.log" Nov 27 19:01:38 crc kubenswrapper[4792]: I1127 19:01:38.974137 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/account-reaper/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.105374 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/container-auditor/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.105667 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/account-replicator/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.139431 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/account-server/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.252583 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/container-replicator/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.327623 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/container-server/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.394556 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/container-updater/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.429011 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-auditor/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.518984 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-expirer/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.576173 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-replicator/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.593210 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-server/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.669874 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/object-updater/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.759332 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/rsync/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.841025 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b7d63be6-0f2b-4b86-abec-4576d23792a9/swift-recon-cron/0.log" Nov 27 19:01:39 crc kubenswrapper[4792]: I1127 19:01:39.905701 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5kg2z_8bfd070a-8c21-4c11-b794-c5410285a701/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:40 crc kubenswrapper[4792]: I1127 19:01:40.173850 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-tsstl_78842a98-31e3-4f0b-8f35-6b8a1856a994/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:40 crc kubenswrapper[4792]: I1127 19:01:40.418826 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_858cefc2-c01c-428d-852b-f3599dda658b/test-operator-logs-container/0.log" Nov 27 19:01:40 crc kubenswrapper[4792]: I1127 19:01:40.558916 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9thzs_4c243ad9-2b01-41fe-8efd-fb9bf4bef2c9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 27 19:01:41 crc kubenswrapper[4792]: I1127 19:01:41.350461 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_57348a1d-d6f9-4844-894d-b837afec3bdc/tempest-tests-tempest-tests-runner/0.log" Nov 27 19:01:47 crc kubenswrapper[4792]: I1127 19:01:47.483159 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_a4f2370d-a8cf-4b58-8cdb-2fcd03c5f666/memcached/0.log" Nov 27 19:02:08 crc kubenswrapper[4792]: I1127 19:02:08.781556 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/util/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.011199 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/pull/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.035605 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/util/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.039166 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/pull/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.245475 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/extract/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.245754 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/pull/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.286870 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_6f4c4c4addfe6a8c39cfe1d8e8f2248616a8d53b76fdcf42ead70ee287scf5k_818171bc-2f19-4297-92b9-a01e361b6387/util/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.428213 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-8njms_880e84df-6b95-4c8d-8b4c-146f26d99098/kube-rbac-proxy/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.481703 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-8njms_880e84df-6b95-4c8d-8b4c-146f26d99098/manager/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.521882 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-z7fm9_db57e7fa-0523-4a09-91a0-371fe08e5052/kube-rbac-proxy/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.673227 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-z7fm9_db57e7fa-0523-4a09-91a0-371fe08e5052/manager/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.706653 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-xwttv_f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79/kube-rbac-proxy/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.722761 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-xwttv_f6b3f84c-a691-43dc-b9bc-f5bd2fcafb79/manager/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.865106 4792 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-hkks9_ad88e4ad-7c33-4dac-85ed-54e7f69d8625/kube-rbac-proxy/0.log" Nov 27 19:02:09 crc kubenswrapper[4792]: I1127 19:02:09.955216 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-hkks9_ad88e4ad-7c33-4dac-85ed-54e7f69d8625/manager/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.072506 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-gqmrh_04aba733-246c-4169-b91d-c7708aea6a71/kube-rbac-proxy/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.165284 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-qs7wq_d29cd75e-9782-4f90-b9cf-95329e101cbb/kube-rbac-proxy/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.238181 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-gqmrh_04aba733-246c-4169-b91d-c7708aea6a71/manager/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.290857 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-qs7wq_d29cd75e-9782-4f90-b9cf-95329e101cbb/manager/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.411468 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-zklgd_94d0c824-194b-4d52-ba80-1cc08301a196/kube-rbac-proxy/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.603891 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-zklgd_94d0c824-194b-4d52-ba80-1cc08301a196/manager/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.676771 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-7gmv4_5e49917e-d729-4661-a604-a603f9a8cca7/manager/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.727751 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-7gmv4_5e49917e-d729-4661-a604-a603f9a8cca7/kube-rbac-proxy/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.861361 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-v7kfm_fd4b3618-80a1-4d23-8faa-57c206b08cf6/kube-rbac-proxy/0.log" Nov 27 19:02:10 crc kubenswrapper[4792]: I1127 19:02:10.993361 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-v7kfm_fd4b3618-80a1-4d23-8faa-57c206b08cf6/manager/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.093826 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-dnsbx_652cb29e-91a9-433f-9002-c850a78cb8a4/kube-rbac-proxy/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.108714 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-dnsbx_652cb29e-91a9-433f-9002-c850a78cb8a4/manager/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.227398 4792 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-lp4kf_afd4a5dc-d971-4eeb-8272-0ead1e9b4274/kube-rbac-proxy/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.304489 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-lp4kf_afd4a5dc-d971-4eeb-8272-0ead1e9b4274/manager/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.414121 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-bmtbl_20dd117f-6517-4b59-855d-a0f9d08409a2/kube-rbac-proxy/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.473767 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-bmtbl_20dd117f-6517-4b59-855d-a0f9d08409a2/manager/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.563866 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-2lfqp_2fc2a1fd-6c7e-4d26-801b-5cac891fba51/kube-rbac-proxy/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.693282 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-2lfqp_2fc2a1fd-6c7e-4d26-801b-5cac891fba51/manager/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.774086 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-9qhqv_969e1197-2aaa-42c9-b56e-7af3ef24e205/kube-rbac-proxy/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.774393 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-9qhqv_969e1197-2aaa-42c9-b56e-7af3ef24e205/manager/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.935835 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2_55002036-a4a7-469c-93be-e4483f455a4c/kube-rbac-proxy/0.log" Nov 27 19:02:11 crc kubenswrapper[4792]: I1127 19:02:11.964734 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bjm2w2_55002036-a4a7-469c-93be-e4483f455a4c/manager/0.log" Nov 27 19:02:12 crc kubenswrapper[4792]: I1127 19:02:12.332556 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-b44dff85c-lpx9d_d8282ec7-1375-403d-b679-d7e372e07f6f/operator/0.log" Nov 27 19:02:12 crc kubenswrapper[4792]: I1127 19:02:12.413814 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-n6smr_ad9fe5d7-1539-4597-b2b7-5fc5cf555264/registry-server/0.log" Nov 27 19:02:12 crc kubenswrapper[4792]: I1127 19:02:12.549785 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-z5rhr_57052bd6-e7c2-4ea0-bc6e-839ed4803541/kube-rbac-proxy/0.log" Nov 27 19:02:12 crc kubenswrapper[4792]: I1127 19:02:12.743825 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-z5rhr_57052bd6-e7c2-4ea0-bc6e-839ed4803541/manager/0.log" Nov 27 19:02:12 crc kubenswrapper[4792]: I1127 
19:02:12.820306 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-fqb9p_cf051bf3-d415-40eb-8071-8f0509377c34/kube-rbac-proxy/0.log" Nov 27 19:02:12 crc kubenswrapper[4792]: I1127 19:02:12.821387 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-fqb9p_cf051bf3-d415-40eb-8071-8f0509377c34/manager/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.046270 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-q87xm_4d308a9c-7874-457f-a97f-4bb784a11783/operator/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.080389 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-75zc9_074f9cbe-fb30-4e1f-9156-ccc5100dcd3b/kube-rbac-proxy/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.306551 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-75zc9_074f9cbe-fb30-4e1f-9156-ccc5100dcd3b/manager/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.396957 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-ff79b6df5-jrwv2_4193b9b8-da59-42cf-94b2-a327608c59a6/kube-rbac-proxy/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.587613 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-8kj9s_de56fbe3-d4c6-430f-8b94-5136fbf4a79c/kube-rbac-proxy/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.705202 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-8kj9s_de56fbe3-d4c6-430f-8b94-5136fbf4a79c/manager/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.764461 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6644d5b8df-w6zdt_f1ef7f3c-052e-45e2-a51a-5d114d634c12/manager/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.773930 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-ff79b6df5-jrwv2_4193b9b8-da59-42cf-94b2-a327608c59a6/manager/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.816940 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-bvh8l_6f57d750-e016-4d78-bdbe-b9b1c5a21787/kube-rbac-proxy/0.log" Nov 27 19:02:13 crc kubenswrapper[4792]: I1127 19:02:13.893837 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-bvh8l_6f57d750-e016-4d78-bdbe-b9b1c5a21787/manager/0.log" Nov 27 19:02:32 crc kubenswrapper[4792]: I1127 19:02:32.133629 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-cx4g9_e0f375d2-c00a-49a3-963d-5d2bb71fa625/control-plane-machine-set-operator/0.log" Nov 27 19:02:32 crc kubenswrapper[4792]: I1127 19:02:32.204455 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6lwpq_7bbb7ab5-a68c-402c-99d8-9cd47c361ccd/kube-rbac-proxy/0.log" Nov 27 19:02:32 
crc kubenswrapper[4792]: I1127 19:02:32.322928 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6lwpq_7bbb7ab5-a68c-402c-99d8-9cd47c361ccd/machine-api-operator/0.log" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.721886 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9vz8l"] Nov 27 19:02:33 crc kubenswrapper[4792]: E1127 19:02:33.722854 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c" containerName="keystone-cron" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.722876 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c" containerName="keystone-cron" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.723223 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d7fb1f4-b8d4-4628-9c34-4f8bb926b66c" containerName="keystone-cron" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.725450 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.739976 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vz8l"] Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.762444 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqck9\" (UniqueName: \"kubernetes.io/projected/d8ac6935-056e-4fa0-97a5-83c53b534fa1-kube-api-access-qqck9\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.762517 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-utilities\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.762693 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-catalog-content\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.864175 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqck9\" (UniqueName: \"kubernetes.io/projected/d8ac6935-056e-4fa0-97a5-83c53b534fa1-kube-api-access-qqck9\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.864543 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-utilities\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.864661 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-catalog-content\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.865213 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-utilities\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.867522 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-catalog-content\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:33 crc kubenswrapper[4792]: I1127 19:02:33.883836 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqck9\" (UniqueName: \"kubernetes.io/projected/d8ac6935-056e-4fa0-97a5-83c53b534fa1-kube-api-access-qqck9\") pod \"redhat-marketplace-9vz8l\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:34 crc kubenswrapper[4792]: I1127 19:02:34.062195 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:35 crc kubenswrapper[4792]: I1127 19:02:35.003239 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vz8l"] Nov 27 19:02:35 crc kubenswrapper[4792]: I1127 19:02:35.853253 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerID="9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e" exitCode=0 Nov 27 19:02:35 crc kubenswrapper[4792]: I1127 19:02:35.853324 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vz8l" event={"ID":"d8ac6935-056e-4fa0-97a5-83c53b534fa1","Type":"ContainerDied","Data":"9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e"} Nov 27 19:02:35 crc kubenswrapper[4792]: I1127 19:02:35.853620 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vz8l" event={"ID":"d8ac6935-056e-4fa0-97a5-83c53b534fa1","Type":"ContainerStarted","Data":"68f6c9a6651f5abe0580c5bb81a7a2b5e125156aea002f77ec3b32bc5fbfff57"} Nov 27 19:02:35 crc kubenswrapper[4792]: I1127 19:02:35.856569 4792 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 27 19:02:36 crc kubenswrapper[4792]: I1127 19:02:36.865578 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vz8l" event={"ID":"d8ac6935-056e-4fa0-97a5-83c53b534fa1","Type":"ContainerStarted","Data":"7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c"} Nov 27 19:02:37 crc kubenswrapper[4792]: E1127 19:02:37.545746 4792 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8ac6935_056e_4fa0_97a5_83c53b534fa1.slice/crio-conmon-7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c.scope\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8ac6935_056e_4fa0_97a5_83c53b534fa1.slice/crio-7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c.scope\": RecentStats: unable to find data in memory cache]" Nov 27 19:02:37 crc kubenswrapper[4792]: I1127 19:02:37.879142 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerID="7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c" exitCode=0 Nov 27 19:02:37 crc kubenswrapper[4792]: I1127 19:02:37.879195 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vz8l" event={"ID":"d8ac6935-056e-4fa0-97a5-83c53b534fa1","Type":"ContainerDied","Data":"7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c"} Nov 27 19:02:39 crc kubenswrapper[4792]: I1127 19:02:39.911975 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vz8l" event={"ID":"d8ac6935-056e-4fa0-97a5-83c53b534fa1","Type":"ContainerStarted","Data":"f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b"} Nov 27 19:02:39 crc kubenswrapper[4792]: I1127 19:02:39.932230 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9vz8l" podStartSLOduration=4.138674191 podStartE2EDuration="6.932206734s" podCreationTimestamp="2025-11-27 19:02:33 +0000 UTC" firstStartedPulling="2025-11-27 19:02:35.856126058 +0000 UTC m=+6778.198952396" lastFinishedPulling="2025-11-27 19:02:38.649658621 +0000 UTC m=+6780.992484939" observedRunningTime="2025-11-27 19:02:39.927757593 +0000 UTC m=+6782.270583901" watchObservedRunningTime="2025-11-27 19:02:39.932206734 +0000 UTC m=+6782.275033052" Nov 27 19:02:44 crc kubenswrapper[4792]: I1127 19:02:44.062848 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:44 crc kubenswrapper[4792]: I1127 19:02:44.063439 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:44 crc kubenswrapper[4792]: I1127 19:02:44.114481 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:44 crc kubenswrapper[4792]: I1127 19:02:44.831913 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-r7tdl_d6ba8d56-25bc-4fa4-bbeb-8412cf566d8f/cert-manager-controller/0.log" Nov 27 19:02:44 crc kubenswrapper[4792]: I1127 19:02:44.978854 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-zk24k_ef16aa33-7753-41e0-b78f-533ea2f2dd76/cert-manager-cainjector/0.log" Nov 27 19:02:45 crc kubenswrapper[4792]: I1127 19:02:45.021957 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:45 crc kubenswrapper[4792]: I1127 19:02:45.056178 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-l4gfl_3d4b582d-f8e6-477c-be1e-36f53bbc52e5/cert-manager-webhook/0.log" Nov 27 19:02:45 crc kubenswrapper[4792]: I1127 19:02:45.072386 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vz8l"]
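The "Observed pod startup duration" record a few entries up is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (19:02:39.932206734 - 19:02:33 = 6.932206734s), and podStartSLOduration matches that figure minus the image-pull window taken from the monotonic m=+ offsets. This interpretation of the fields is inferred from the numbers themselves, not from documentation; a quick check in Go:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (m=+..., in seconds) copied from the
        // pod_startup_latency_tracker entry for redhat-marketplace-9vz8l.
        const (
            firstStartedPulling = 6778.198952396
            lastFinishedPulling = 6780.992484939
            podStartE2E         = 6.932206734 // watchObservedRunningTime - podCreationTimestamp
        )
        pull := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image-pull window: %.9fs\n", pull)             // 2.793532543s
        fmt.Printf("E2E minus pull:    %.9fs\n", podStartE2E-pull) // 4.138674191s = logged podStartSLOduration
    }

The certified-operators-f8bwp record further down fits the same pattern: 6.133778753 - (6797.983770384 - 6794.387688156) = 2.537696525, exactly the logged podStartSLOduration.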
Nov 27 19:02:46 crc kubenswrapper[4792]: I1127 19:02:46.981338 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9vz8l" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerName="registry-server" containerID="cri-o://f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b" gracePeriod=2 Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.504911 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.584772 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-catalog-content\") pod \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.585208 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-utilities\") pod \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.585635 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqck9\" (UniqueName: \"kubernetes.io/projected/d8ac6935-056e-4fa0-97a5-83c53b534fa1-kube-api-access-qqck9\") pod \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\" (UID: \"d8ac6935-056e-4fa0-97a5-83c53b534fa1\") " Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.585838 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-utilities" (OuterVolumeSpecName: "utilities") pod "d8ac6935-056e-4fa0-97a5-83c53b534fa1" (UID: "d8ac6935-056e-4fa0-97a5-83c53b534fa1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.586569 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.592832 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ac6935-056e-4fa0-97a5-83c53b534fa1-kube-api-access-qqck9" (OuterVolumeSpecName: "kube-api-access-qqck9") pod "d8ac6935-056e-4fa0-97a5-83c53b534fa1" (UID: "d8ac6935-056e-4fa0-97a5-83c53b534fa1"). InnerVolumeSpecName "kube-api-access-qqck9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.601134 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8ac6935-056e-4fa0-97a5-83c53b534fa1" (UID: "d8ac6935-056e-4fa0-97a5-83c53b534fa1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.688513 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqck9\" (UniqueName: \"kubernetes.io/projected/d8ac6935-056e-4fa0-97a5-83c53b534fa1-kube-api-access-qqck9\") on node \"crc\" DevicePath \"\"" Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.688540 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ac6935-056e-4fa0-97a5-83c53b534fa1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.993522 4792 generic.go:334] "Generic (PLEG): container finished" podID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerID="f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b" exitCode=0 Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.993695 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vz8l" event={"ID":"d8ac6935-056e-4fa0-97a5-83c53b534fa1","Type":"ContainerDied","Data":"f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b"} Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.993904 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9vz8l" event={"ID":"d8ac6935-056e-4fa0-97a5-83c53b534fa1","Type":"ContainerDied","Data":"68f6c9a6651f5abe0580c5bb81a7a2b5e125156aea002f77ec3b32bc5fbfff57"} Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.993931 4792 scope.go:117] "RemoveContainer" containerID="f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b" Nov 27 19:02:47 crc kubenswrapper[4792]: I1127 19:02:47.993780 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9vz8l" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.029480 4792 scope.go:117] "RemoveContainer" containerID="7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.038822 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vz8l"] Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.051041 4792 scope.go:117] "RemoveContainer" containerID="9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.052305 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9vz8l"] Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.111590 4792 scope.go:117] "RemoveContainer" containerID="f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b" Nov 27 19:02:48 crc kubenswrapper[4792]: E1127 19:02:48.113241 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b\": container with ID starting with f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b not found: ID does not exist" containerID="f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.113284 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b"} err="failed to get container status \"f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b\": rpc error: code = NotFound desc = could not find container \"f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b\": container with ID starting with f3be875fda577360b8af6b7f5661b246a3a93875745619c6246a32cf3635637b not found: ID does not exist" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.113307 4792 scope.go:117] "RemoveContainer" containerID="7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c" Nov 27 19:02:48 crc kubenswrapper[4792]: E1127 19:02:48.113930 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c\": container with ID starting with 7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c not found: ID does not exist" containerID="7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.113980 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c"} err="failed to get container status \"7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c\": rpc error: code = NotFound desc = could not find container \"7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c\": container with ID starting with 7d9a5ccfd561814fbae4309946e277115124891588b7b937fecafb1125395a5c not found: ID does not exist" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.114009 4792 scope.go:117] "RemoveContainer" containerID="9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e" Nov 27 19:02:48 crc kubenswrapper[4792]: E1127 19:02:48.114968 4792 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e\": container with ID starting with 9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e not found: ID does not exist" containerID="9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.115001 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e"} err="failed to get container status \"9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e\": rpc error: code = NotFound desc = could not find container \"9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e\": container with ID starting with 9114f6902fcf55eff1cb56aec9377b7e69b711b73a7bf7e44ee4d121deb5e38e not found: ID does not exist" Nov 27 19:02:48 crc kubenswrapper[4792]: I1127 19:02:48.699885 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" path="/var/lib/kubelet/pods/d8ac6935-056e-4fa0-97a5-83c53b534fa1/volumes" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.533569 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f8bwp"] Nov 27 19:02:50 crc kubenswrapper[4792]: E1127 19:02:50.535073 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerName="extract-utilities" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.535088 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerName="extract-utilities" Nov 27 19:02:50 crc kubenswrapper[4792]: E1127 19:02:50.535119 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerName="extract-content" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.535125 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerName="extract-content" Nov 27 19:02:50 crc kubenswrapper[4792]: E1127 19:02:50.535149 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerName="registry-server" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.535156 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerName="registry-server" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.535414 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ac6935-056e-4fa0-97a5-83c53b534fa1" containerName="registry-server" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.537834 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.549705 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8bwp"] Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.660538 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-catalog-content\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.660603 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-utilities\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.660674 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzxwc\" (UniqueName: \"kubernetes.io/projected/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-kube-api-access-qzxwc\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.762926 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-catalog-content\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.763222 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-utilities\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.763248 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzxwc\" (UniqueName: \"kubernetes.io/projected/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-kube-api-access-qzxwc\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.763528 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-catalog-content\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.763589 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-utilities\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.784308 4792 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qzxwc\" (UniqueName: \"kubernetes.io/projected/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-kube-api-access-qzxwc\") pod \"certified-operators-f8bwp\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:50 crc kubenswrapper[4792]: I1127 19:02:50.859834 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:02:51 crc kubenswrapper[4792]: I1127 19:02:51.526126 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f8bwp"] Nov 27 19:02:52 crc kubenswrapper[4792]: I1127 19:02:52.041002 4792 generic.go:334] "Generic (PLEG): container finished" podID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerID="66497af4afcc5fd423cdd089d8fff8ebac3e493dfc21c07293ccb1c771d7bfd9" exitCode=0 Nov 27 19:02:52 crc kubenswrapper[4792]: I1127 19:02:52.041074 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8bwp" event={"ID":"f6ac7d95-1b0f-426b-9fe4-cc50146638f8","Type":"ContainerDied","Data":"66497af4afcc5fd423cdd089d8fff8ebac3e493dfc21c07293ccb1c771d7bfd9"} Nov 27 19:02:52 crc kubenswrapper[4792]: I1127 19:02:52.041333 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8bwp" event={"ID":"f6ac7d95-1b0f-426b-9fe4-cc50146638f8","Type":"ContainerStarted","Data":"76480bb96de17aa106e316b157711256493072c1bc3cb69f3e35d11e87f4d906"} Nov 27 19:02:54 crc kubenswrapper[4792]: I1127 19:02:54.074634 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8bwp" event={"ID":"f6ac7d95-1b0f-426b-9fe4-cc50146638f8","Type":"ContainerStarted","Data":"90ef0d94a814b5d5686a4c41da52148b48cdb84212a72014c2dfd1812eb18eef"} Nov 27 19:02:55 crc kubenswrapper[4792]: I1127 19:02:55.090569 4792 generic.go:334] "Generic (PLEG): container finished" podID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerID="90ef0d94a814b5d5686a4c41da52148b48cdb84212a72014c2dfd1812eb18eef" exitCode=0 Nov 27 19:02:55 crc kubenswrapper[4792]: I1127 19:02:55.090637 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8bwp" event={"ID":"f6ac7d95-1b0f-426b-9fe4-cc50146638f8","Type":"ContainerDied","Data":"90ef0d94a814b5d5686a4c41da52148b48cdb84212a72014c2dfd1812eb18eef"} Nov 27 19:02:56 crc kubenswrapper[4792]: I1127 19:02:56.110122 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8bwp" event={"ID":"f6ac7d95-1b0f-426b-9fe4-cc50146638f8","Type":"ContainerStarted","Data":"11b6d2e134be77aa661afe8edc725cbff4c6308d511d87d4f8e85101b8e985aa"} Nov 27 19:02:56 crc kubenswrapper[4792]: I1127 19:02:56.133794 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f8bwp" podStartSLOduration=2.537696525 podStartE2EDuration="6.133778753s" podCreationTimestamp="2025-11-27 19:02:50 +0000 UTC" firstStartedPulling="2025-11-27 19:02:52.044861838 +0000 UTC m=+6794.387688156" lastFinishedPulling="2025-11-27 19:02:55.640944066 +0000 UTC m=+6797.983770384" observedRunningTime="2025-11-27 19:02:56.130963853 +0000 UTC m=+6798.473790171" watchObservedRunningTime="2025-11-27 19:02:56.133778753 +0000 UTC m=+6798.476605071" Nov 27 19:02:58 crc kubenswrapper[4792]: I1127 19:02:58.499315 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-th928_5131ddcc-b3d4-4df4-9474-19896fb63573/nmstate-console-plugin/0.log" Nov 27 19:02:58 crc kubenswrapper[4792]: I1127 19:02:58.681512 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7z5qw_074e28a6-e1f1-43d3-b34a-b2d8c143f8af/nmstate-handler/0.log" Nov 27 19:02:58 crc kubenswrapper[4792]: I1127 19:02:58.739358 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-r6wgp_a21d5243-150d-488b-9cf2-ab95ee2732e6/nmstate-metrics/0.log" Nov 27 19:02:58 crc kubenswrapper[4792]: I1127 19:02:58.747865 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-r6wgp_a21d5243-150d-488b-9cf2-ab95ee2732e6/kube-rbac-proxy/0.log" Nov 27 19:02:58 crc kubenswrapper[4792]: I1127 19:02:58.947307 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-j62cc_ded38fca-6b87-471c-ac68-423a6963dca6/nmstate-operator/0.log" Nov 27 19:02:58 crc kubenswrapper[4792]: I1127 19:02:58.978945 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-lmhfs_da86e440-8b68-4f21-bc7b-5cc71334ce5a/nmstate-webhook/0.log" Nov 27 19:03:00 crc kubenswrapper[4792]: I1127 19:03:00.861024 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:03:00 crc kubenswrapper[4792]: I1127 19:03:00.861337 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:03:00 crc kubenswrapper[4792]: I1127 19:03:00.918528 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:03:01 crc kubenswrapper[4792]: I1127 19:03:01.211951 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:03:01 crc kubenswrapper[4792]: I1127 19:03:01.262569 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8bwp"] Nov 27 19:03:03 crc kubenswrapper[4792]: I1127 19:03:03.182372 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f8bwp" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerName="registry-server" containerID="cri-o://11b6d2e134be77aa661afe8edc725cbff4c6308d511d87d4f8e85101b8e985aa" gracePeriod=2 Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.200349 4792 generic.go:334] "Generic (PLEG): container finished" podID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerID="11b6d2e134be77aa661afe8edc725cbff4c6308d511d87d4f8e85101b8e985aa" exitCode=0 Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.200828 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8bwp" event={"ID":"f6ac7d95-1b0f-426b-9fe4-cc50146638f8","Type":"ContainerDied","Data":"11b6d2e134be77aa661afe8edc725cbff4c6308d511d87d4f8e85101b8e985aa"} Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.453576 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.620925 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-catalog-content\") pod \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.621216 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzxwc\" (UniqueName: \"kubernetes.io/projected/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-kube-api-access-qzxwc\") pod \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.621324 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-utilities\") pod \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\" (UID: \"f6ac7d95-1b0f-426b-9fe4-cc50146638f8\") " Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.621848 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-utilities" (OuterVolumeSpecName: "utilities") pod "f6ac7d95-1b0f-426b-9fe4-cc50146638f8" (UID: "f6ac7d95-1b0f-426b-9fe4-cc50146638f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.635575 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-kube-api-access-qzxwc" (OuterVolumeSpecName: "kube-api-access-qzxwc") pod "f6ac7d95-1b0f-426b-9fe4-cc50146638f8" (UID: "f6ac7d95-1b0f-426b-9fe4-cc50146638f8"). InnerVolumeSpecName "kube-api-access-qzxwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.670161 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6ac7d95-1b0f-426b-9fe4-cc50146638f8" (UID: "f6ac7d95-1b0f-426b-9fe4-cc50146638f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.724120 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzxwc\" (UniqueName: \"kubernetes.io/projected/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-kube-api-access-qzxwc\") on node \"crc\" DevicePath \"\"" Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.724158 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 19:03:04 crc kubenswrapper[4792]: I1127 19:03:04.724174 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6ac7d95-1b0f-426b-9fe4-cc50146638f8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 19:03:05 crc kubenswrapper[4792]: I1127 19:03:05.213689 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f8bwp" event={"ID":"f6ac7d95-1b0f-426b-9fe4-cc50146638f8","Type":"ContainerDied","Data":"76480bb96de17aa106e316b157711256493072c1bc3cb69f3e35d11e87f4d906"} Nov 27 19:03:05 crc kubenswrapper[4792]: I1127 19:03:05.213745 4792 scope.go:117] "RemoveContainer" containerID="11b6d2e134be77aa661afe8edc725cbff4c6308d511d87d4f8e85101b8e985aa" Nov 27 19:03:05 crc kubenswrapper[4792]: I1127 19:03:05.213921 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f8bwp" Nov 27 19:03:05 crc kubenswrapper[4792]: I1127 19:03:05.241863 4792 scope.go:117] "RemoveContainer" containerID="90ef0d94a814b5d5686a4c41da52148b48cdb84212a72014c2dfd1812eb18eef" Nov 27 19:03:05 crc kubenswrapper[4792]: I1127 19:03:05.253797 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f8bwp"] Nov 27 19:03:05 crc kubenswrapper[4792]: I1127 19:03:05.268468 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f8bwp"] Nov 27 19:03:05 crc kubenswrapper[4792]: I1127 19:03:05.302314 4792 scope.go:117] "RemoveContainer" containerID="66497af4afcc5fd423cdd089d8fff8ebac3e493dfc21c07293ccb1c771d7bfd9" Nov 27 19:03:06 crc kubenswrapper[4792]: I1127 19:03:06.699513 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" path="/var/lib/kubelet/pods/f6ac7d95-1b0f-426b-9fe4-cc50146638f8/volumes" Nov 27 19:03:08 crc kubenswrapper[4792]: I1127 19:03:08.290056 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 19:03:08 crc kubenswrapper[4792]: I1127 19:03:08.290365 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 19:03:12 crc kubenswrapper[4792]: I1127 19:03:12.728875 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5994f6989f-4s6cj_01e17fe3-0b99-4719-8a19-bdb45dabeaac/kube-rbac-proxy/0.log" Nov 27 19:03:12 crc 
kubenswrapper[4792]: I1127 19:03:12.815230 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5994f6989f-4s6cj_01e17fe3-0b99-4719-8a19-bdb45dabeaac/manager/0.log" Nov 27 19:03:27 crc kubenswrapper[4792]: I1127 19:03:27.365316 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-g2l98_286d6b8b-ff31-4c5c-84cf-9ec7bdece2a0/cluster-logging-operator/0.log" Nov 27 19:03:27 crc kubenswrapper[4792]: I1127 19:03:27.556726 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-kv4ll_b495d78f-2e10-4171-88ba-2ddb90195710/collector/0.log" Nov 27 19:03:27 crc kubenswrapper[4792]: I1127 19:03:27.588996 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_6a9851c2-362b-425e-adf3-5056cbbfb169/loki-compactor/0.log" Nov 27 19:03:27 crc kubenswrapper[4792]: I1127 19:03:27.775985 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-qqqzr_5e5bb18c-7c60-4ec3-ac94-e33904750bb8/gateway/0.log" Nov 27 19:03:27 crc kubenswrapper[4792]: I1127 19:03:27.822550 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-dgnv7_b883f630-7c31-4a1a-9633-8770b40c5a69/loki-distributor/0.log" Nov 27 19:03:27 crc kubenswrapper[4792]: I1127 19:03:27.982272 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-qqqzr_5e5bb18c-7c60-4ec3-ac94-e33904750bb8/opa/0.log" Nov 27 19:03:28 crc kubenswrapper[4792]: I1127 19:03:28.034950 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-tqz78_8435b802-65cf-46a0-89fa-fa55e43dfb68/gateway/0.log" Nov 27 19:03:28 crc kubenswrapper[4792]: I1127 19:03:28.097272 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767db5f6c6-tqz78_8435b802-65cf-46a0-89fa-fa55e43dfb68/opa/0.log" Nov 27 19:03:28 crc kubenswrapper[4792]: I1127 19:03:28.294335 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_3b4c2851-5058-4cfc-9efa-a5d94e7e8090/loki-index-gateway/0.log" Nov 27 19:03:28 crc kubenswrapper[4792]: I1127 19:03:28.433936 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_2a318073-842f-45ff-b6df-bc0abc0d576b/loki-ingester/0.log" Nov 27 19:03:28 crc kubenswrapper[4792]: I1127 19:03:28.531100 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-9rscz_8f70d890-772f-49eb-9c3b-0553bc2349ca/loki-querier/0.log" Nov 27 19:03:28 crc kubenswrapper[4792]: I1127 19:03:28.648492 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-l69ln_1828379a-4323-4161-881c-cf67367db9d4/loki-query-frontend/0.log" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.290840 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.291376 4792 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.492856 4792 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gpxrn"] Nov 27 19:03:38 crc kubenswrapper[4792]: E1127 19:03:38.493528 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerName="registry-server" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.493545 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerName="registry-server" Nov 27 19:03:38 crc kubenswrapper[4792]: E1127 19:03:38.493564 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerName="extract-content" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.493570 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerName="extract-content" Nov 27 19:03:38 crc kubenswrapper[4792]: E1127 19:03:38.493592 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerName="extract-utilities" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.493599 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerName="extract-utilities" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.493859 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac7d95-1b0f-426b-9fe4-cc50146638f8" containerName="registry-server" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.495807 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.529671 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpxrn"] Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.634652 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-catalog-content\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.634746 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnz7d\" (UniqueName: \"kubernetes.io/projected/b3deb96f-d16c-4554-abdc-e4bf834711d1-kube-api-access-nnz7d\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.635161 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-utilities\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.737058 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-catalog-content\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.737149 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnz7d\" (UniqueName: \"kubernetes.io/projected/b3deb96f-d16c-4554-abdc-e4bf834711d1-kube-api-access-nnz7d\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.737250 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-utilities\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.737740 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-utilities\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.737947 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-catalog-content\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.769323 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nnz7d\" (UniqueName: \"kubernetes.io/projected/b3deb96f-d16c-4554-abdc-e4bf834711d1-kube-api-access-nnz7d\") pod \"redhat-operators-gpxrn\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") " pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:38 crc kubenswrapper[4792]: I1127 19:03:38.851290 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:39 crc kubenswrapper[4792]: I1127 19:03:39.431985 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpxrn"] Nov 27 19:03:39 crc kubenswrapper[4792]: I1127 19:03:39.581717 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpxrn" event={"ID":"b3deb96f-d16c-4554-abdc-e4bf834711d1","Type":"ContainerStarted","Data":"ef5f8fec580d765a843ddaf1b161d7ec4e06164392e45a8deb6db623c410cd4e"} Nov 27 19:03:40 crc kubenswrapper[4792]: I1127 19:03:40.592143 4792 generic.go:334] "Generic (PLEG): container finished" podID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerID="8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20" exitCode=0 Nov 27 19:03:40 crc kubenswrapper[4792]: I1127 19:03:40.592257 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpxrn" event={"ID":"b3deb96f-d16c-4554-abdc-e4bf834711d1","Type":"ContainerDied","Data":"8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20"} Nov 27 19:03:41 crc kubenswrapper[4792]: I1127 19:03:41.605421 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpxrn" event={"ID":"b3deb96f-d16c-4554-abdc-e4bf834711d1","Type":"ContainerStarted","Data":"efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a"} Nov 27 19:03:43 crc kubenswrapper[4792]: I1127 19:03:43.630415 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-b2ds9_0056c3c2-a1e5-4733-a428-fd3b91475472/kube-rbac-proxy/0.log" Nov 27 19:03:43 crc kubenswrapper[4792]: I1127 19:03:43.903759 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-frr-files/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.034921 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-frr-files/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.046487 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-b2ds9_0056c3c2-a1e5-4733-a428-fd3b91475472/controller/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.106345 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-reloader/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.108043 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-metrics/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.258930 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-reloader/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.516261 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-reloader/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.532693 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-frr-files/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.534796 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-metrics/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.541122 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-metrics/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.727862 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-frr-files/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.764984 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-metrics/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.766627 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/cp-reloader/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.831187 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/controller/0.log" Nov 27 19:03:44 crc kubenswrapper[4792]: I1127 19:03:44.967398 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/frr-metrics/0.log" Nov 27 19:03:45 crc kubenswrapper[4792]: I1127 19:03:45.061905 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/kube-rbac-proxy/0.log" Nov 27 19:03:45 crc kubenswrapper[4792]: I1127 19:03:45.098235 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/kube-rbac-proxy-frr/0.log" Nov 27 19:03:45 crc kubenswrapper[4792]: I1127 19:03:45.197381 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/reloader/0.log" Nov 27 19:03:45 crc kubenswrapper[4792]: I1127 19:03:45.297716 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-k5t27_1cebbd73-ff6c-46b4-8b96-da44b744dc66/frr-k8s-webhook-server/0.log" Nov 27 19:03:45 crc kubenswrapper[4792]: I1127 19:03:45.523269 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-fbff999dd-d7fwq_4de98138-86e1-4a92-84ff-4ef1a2a1d57b/manager/0.log" Nov 27 19:03:45 crc kubenswrapper[4792]: I1127 19:03:45.661612 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7844df848f-mmmmh_25e66971-1039-45a3-9010-17efb7f2dbf6/webhook-server/0.log" Nov 27 19:03:45 crc kubenswrapper[4792]: I1127 19:03:45.885238 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rptqb_ee78f3b0-9199-41a2-ad7a-64e175706386/kube-rbac-proxy/0.log" Nov 27 19:03:47 crc kubenswrapper[4792]: I1127 19:03:47.221931 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-rptqb_ee78f3b0-9199-41a2-ad7a-64e175706386/speaker/0.log" Nov 27 19:03:47 crc kubenswrapper[4792]: I1127 19:03:47.374378 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhchz_a4f24305-d786-4537-b13b-86e83451bef4/frr/0.log" Nov 27 19:03:47 crc kubenswrapper[4792]: I1127 19:03:47.681267 4792 generic.go:334] "Generic (PLEG): container finished" podID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerID="efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a" exitCode=0 Nov 27 19:03:47 crc kubenswrapper[4792]: I1127 19:03:47.681312 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpxrn" event={"ID":"b3deb96f-d16c-4554-abdc-e4bf834711d1","Type":"ContainerDied","Data":"efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a"} Nov 27 19:03:48 crc kubenswrapper[4792]: I1127 19:03:48.707021 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpxrn" event={"ID":"b3deb96f-d16c-4554-abdc-e4bf834711d1","Type":"ContainerStarted","Data":"219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4"} Nov 27 19:03:48 crc kubenswrapper[4792]: I1127 19:03:48.731024 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gpxrn" podStartSLOduration=3.227172043 podStartE2EDuration="10.731005174s" podCreationTimestamp="2025-11-27 19:03:38 +0000 UTC" firstStartedPulling="2025-11-27 19:03:40.594588093 +0000 UTC m=+6842.937414411" lastFinishedPulling="2025-11-27 19:03:48.098421224 +0000 UTC m=+6850.441247542" observedRunningTime="2025-11-27 19:03:48.724180634 +0000 UTC m=+6851.067006962" watchObservedRunningTime="2025-11-27 19:03:48.731005174 +0000 UTC m=+6851.073831502" Nov 27 19:03:48 crc kubenswrapper[4792]: I1127 19:03:48.852310 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:48 crc kubenswrapper[4792]: I1127 19:03:48.852399 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:03:49 crc kubenswrapper[4792]: I1127 19:03:49.904692 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gpxrn" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="registry-server" probeResult="failure" output=< Nov 27 19:03:49 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 19:03:49 crc kubenswrapper[4792]: > Nov 27 19:03:59 crc kubenswrapper[4792]: I1127 19:03:59.898103 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gpxrn" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="registry-server" probeResult="failure" output=< Nov 27 19:03:59 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 19:03:59 crc kubenswrapper[4792]: > Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.112043 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/util/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.331000 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/util/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.334821 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/pull/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.376937 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/pull/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.548001 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/util/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.557174 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/pull/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.602580 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb856n8k_e6d145eb-d07b-4d67-955e-4a85351799d3/extract/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.730146 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/util/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.957424 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/pull/0.log" Nov 27 19:04:01 crc kubenswrapper[4792]: I1127 19:04:01.976044 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/util/0.log" Nov 27 19:04:02 crc kubenswrapper[4792]: I1127 19:04:02.019150 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/pull/0.log" Nov 27 19:04:02 crc kubenswrapper[4792]: I1127 19:04:02.173443 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/pull/0.log" Nov 27 19:04:02 crc kubenswrapper[4792]: I1127 19:04:02.181077 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/extract/0.log" Nov 27 19:04:02 crc kubenswrapper[4792]: I1127 19:04:02.215693 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fltld8_702456d4-256d-4792-bfb7-0389c6aa9726/util/0.log" Nov 27 19:04:02 crc kubenswrapper[4792]: I1127 19:04:02.603890 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/util/0.log" Nov 27 19:04:02 crc kubenswrapper[4792]: I1127 19:04:02.765249 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/util/0.log" Nov 27 19:04:02 crc kubenswrapper[4792]: I1127 19:04:02.774626 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/pull/0.log" Nov 27 19:04:02 crc kubenswrapper[4792]: I1127 19:04:02.865455 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/pull/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.059997 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/pull/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.071259 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/util/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.079073 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210tdbq4_6a7fb352-ca11-42ed-9d3d-296e3747292f/extract/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.336239 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/util/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.518673 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/pull/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.546555 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/util/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.590032 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/pull/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.810805 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/util/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.865271 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/extract/0.log" Nov 27 19:04:03 crc kubenswrapper[4792]: I1127 19:04:03.893466 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f4667l_8bf2b066-d894-4413-af14-7e9e03e8d619/pull/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.050694 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/util/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.307371 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/util/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.360584 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/pull/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.367890 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/pull/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.548416 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/pull/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.560031 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/util/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.658678 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83h5j5l_e69faa79-91a5-4146-a048-598f6a9be342/extract/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.761800 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-utilities/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.921331 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-utilities/0.log" Nov 27 19:04:04 crc kubenswrapper[4792]: I1127 19:04:04.952902 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-content/0.log" Nov 27 19:04:05 crc kubenswrapper[4792]: I1127 19:04:05.023677 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-content/0.log" Nov 27 19:04:05 crc kubenswrapper[4792]: I1127 19:04:05.171992 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-utilities/0.log" Nov 27 19:04:05 crc kubenswrapper[4792]: I1127 19:04:05.176747 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/extract-content/0.log" Nov 27 19:04:05 crc kubenswrapper[4792]: I1127 19:04:05.351790 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-utilities/0.log" Nov 27 19:04:05 crc kubenswrapper[4792]: I1127 19:04:05.822940 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-content/0.log" Nov 27 19:04:05 crc kubenswrapper[4792]: I1127 19:04:05.826865 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-content/0.log" Nov 27 19:04:05 crc kubenswrapper[4792]: I1127 19:04:05.891544 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-utilities/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.088211 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-content/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.110086 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/extract-utilities/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.189881 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-t2tmt_fad170db-00fe-471f-b3b3-0201e1b54c21/registry-server/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.397812 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wjl66_4cb69df9-1d51-439c-bb3c-c17bd951bde3/marketplace-operator/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.476984 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-utilities/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.670898 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-utilities/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.675872 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-content/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.688493 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-content/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.761561 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vhqbd_83213a0e-ad1a-4969-bc77-731e9951f0e9/registry-server/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.936777 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-utilities/0.log" Nov 27 19:04:06 crc kubenswrapper[4792]: I1127 19:04:06.953800 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/extract-content/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.049831 4792 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-utilities/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.226793 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5rv5f_ac78acde-862f-4924-a4a1-59edc00f6ee5/registry-server/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.227716 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-utilities/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.248756 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-content/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.295537 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-content/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.434130 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-content/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.492922 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/extract-utilities/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.519374 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gpxrn_b3deb96f-d16c-4554-abdc-e4bf834711d1/extract-utilities/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.775248 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gpxrn_b3deb96f-d16c-4554-abdc-e4bf834711d1/extract-content/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.795565 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gpxrn_b3deb96f-d16c-4554-abdc-e4bf834711d1/extract-content/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.797221 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gpxrn_b3deb96f-d16c-4554-abdc-e4bf834711d1/extract-utilities/0.log" Nov 27 19:04:07 crc kubenswrapper[4792]: I1127 19:04:07.970917 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gpxrn_b3deb96f-d16c-4554-abdc-e4bf834711d1/extract-utilities/0.log" Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.018871 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gpxrn_b3deb96f-d16c-4554-abdc-e4bf834711d1/extract-content/0.log" Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.031700 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gpxrn_b3deb96f-d16c-4554-abdc-e4bf834711d1/registry-server/0.log" Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.289977 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 19:04:08 crc kubenswrapper[4792]: 
I1127 19:04:08.290038 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.290088 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.290965 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7914bc2a1025b6f976e576c7aa0b7b22098e5a8a12f9f7496d8fdc76a25c346"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.291020 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://e7914bc2a1025b6f976e576c7aa0b7b22098e5a8a12f9f7496d8fdc76a25c346" gracePeriod=600 Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.403474 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4k6w8_16cfc9e0-e90a-438f-9128-6d59f065695e/registry-server/0.log" Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.946185 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="e7914bc2a1025b6f976e576c7aa0b7b22098e5a8a12f9f7496d8fdc76a25c346" exitCode=0 Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.946916 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"e7914bc2a1025b6f976e576c7aa0b7b22098e5a8a12f9f7496d8fdc76a25c346"} Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.947061 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerStarted","Data":"1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"} Nov 27 19:04:08 crc kubenswrapper[4792]: I1127 19:04:08.947166 4792 scope.go:117] "RemoveContainer" containerID="bbcaf56fe3f39a14210d9c78e809ed19005404cfe0f627056d2660e9145da6ad" Nov 27 19:04:09 crc kubenswrapper[4792]: I1127 19:04:09.900247 4792 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gpxrn" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="registry-server" probeResult="failure" output=< Nov 27 19:04:09 crc kubenswrapper[4792]: timeout: failed to connect service ":50051" within 1s Nov 27 19:04:09 crc kubenswrapper[4792]: > Nov 27 19:04:18 crc kubenswrapper[4792]: I1127 19:04:18.922396 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:04:18 crc kubenswrapper[4792]: I1127 19:04:18.976840 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gpxrn" Nov 27 19:04:19 crc 
Nov 27 19:04:19 crc kubenswrapper[4792]: I1127 19:04:19.165351 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpxrn"]
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.061517 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gpxrn" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="registry-server" containerID="cri-o://219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4" gracePeriod=2
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.381908 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-xwzqt_f6463a7b-af91-4c4a-b67c-10f17f30becd/prometheus-operator/0.log"
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.594015 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56bc4884fd-9qlzw_deae3170-952b-45e4-9527-ce9b37f90359/prometheus-operator-admission-webhook/0.log"
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.626172 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56bc4884fd-h7x9w_6be8d975-0d93-42ba-9184-21f36ab98ac9/prometheus-operator-admission-webhook/0.log"
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.645566 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpxrn"
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.805071 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnz7d\" (UniqueName: \"kubernetes.io/projected/b3deb96f-d16c-4554-abdc-e4bf834711d1-kube-api-access-nnz7d\") pod \"b3deb96f-d16c-4554-abdc-e4bf834711d1\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") "
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.805115 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-catalog-content\") pod \"b3deb96f-d16c-4554-abdc-e4bf834711d1\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") "
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.805217 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-utilities\") pod \"b3deb96f-d16c-4554-abdc-e4bf834711d1\" (UID: \"b3deb96f-d16c-4554-abdc-e4bf834711d1\") "
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.808348 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-utilities" (OuterVolumeSpecName: "utilities") pod "b3deb96f-d16c-4554-abdc-e4bf834711d1" (UID: "b3deb96f-d16c-4554-abdc-e4bf834711d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.815286 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-qdcnd_b4f46dd1-954f-497f-b491-a3df62aafda6/operator/0.log"
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.829118 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3deb96f-d16c-4554-abdc-e4bf834711d1-kube-api-access-nnz7d" (OuterVolumeSpecName: "kube-api-access-nnz7d") pod "b3deb96f-d16c-4554-abdc-e4bf834711d1" (UID: "b3deb96f-d16c-4554-abdc-e4bf834711d1"). InnerVolumeSpecName "kube-api-access-nnz7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.907964 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnz7d\" (UniqueName: \"kubernetes.io/projected/b3deb96f-d16c-4554-abdc-e4bf834711d1-kube-api-access-nnz7d\") on node \"crc\" DevicePath \"\""
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.908257 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-utilities\") on node \"crc\" DevicePath \"\""
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.918845 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-prnwq_6df6ef32-ac48-4c52-9c23-95926cf8c67d/observability-ui-dashboards/0.log"
Nov 27 19:04:20 crc kubenswrapper[4792]: I1127 19:04:20.921164 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3deb96f-d16c-4554-abdc-e4bf834711d1" (UID: "b3deb96f-d16c-4554-abdc-e4bf834711d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.011016 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3deb96f-d16c-4554-abdc-e4bf834711d1-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.024979 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-pjkg6_44e4a3bf-3593-4c1e-b9cc-4c294ed26692/perses-operator/0.log"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.077457 4792 generic.go:334] "Generic (PLEG): container finished" podID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerID="219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4" exitCode=0
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.077504 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpxrn" event={"ID":"b3deb96f-d16c-4554-abdc-e4bf834711d1","Type":"ContainerDied","Data":"219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4"}
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.077531 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpxrn" event={"ID":"b3deb96f-d16c-4554-abdc-e4bf834711d1","Type":"ContainerDied","Data":"ef5f8fec580d765a843ddaf1b161d7ec4e06164392e45a8deb6db623c410cd4e"}
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.077561 4792 scope.go:117] "RemoveContainer" containerID="219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.077568 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpxrn"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.102923 4792 scope.go:117] "RemoveContainer" containerID="efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.119914 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpxrn"]
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.134424 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gpxrn"]
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.140539 4792 scope.go:117] "RemoveContainer" containerID="8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.201908 4792 scope.go:117] "RemoveContainer" containerID="219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4"
Nov 27 19:04:21 crc kubenswrapper[4792]: E1127 19:04:21.202384 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4\": container with ID starting with 219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4 not found: ID does not exist" containerID="219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.202472 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4"} err="failed to get container status \"219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4\": rpc error: code = NotFound desc = could not find container \"219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4\": container with ID starting with 219048cdfd393edb18e2b366bc917efb5accf65f9c299d768c6de8b0cf24eea4 not found: ID does not exist"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.202506 4792 scope.go:117] "RemoveContainer" containerID="efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a"
Nov 27 19:04:21 crc kubenswrapper[4792]: E1127 19:04:21.202815 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a\": container with ID starting with efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a not found: ID does not exist" containerID="efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.202844 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a"} err="failed to get container status \"efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a\": rpc error: code = NotFound desc = could not find container \"efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a\": container with ID starting with efb944fe497d0d75fed3d17a806f06cb9c559a72860df7bc1d75b698707c5c1a not found: ID does not exist"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.202866 4792 scope.go:117] "RemoveContainer" containerID="8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20"
Nov 27 19:04:21 crc kubenswrapper[4792]: E1127 19:04:21.203146 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20\": container with ID starting with 8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20 not found: ID does not exist" containerID="8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20"
Nov 27 19:04:21 crc kubenswrapper[4792]: I1127 19:04:21.203193 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20"} err="failed to get container status \"8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20\": rpc error: code = NotFound desc = could not find container \"8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20\": container with ID starting with 8693700b583cf43b4f9f4939bc01ae3578eb84296db8bba81224f0b152b98a20 not found: ID does not exist"
Nov 27 19:04:22 crc kubenswrapper[4792]: I1127 19:04:22.710931 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" path="/var/lib/kubelet/pods/b3deb96f-d16c-4554-abdc-e4bf834711d1/volumes"
Nov 27 19:04:34 crc kubenswrapper[4792]: I1127 19:04:34.442415 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5994f6989f-4s6cj_01e17fe3-0b99-4719-8a19-bdb45dabeaac/kube-rbac-proxy/0.log"
Nov 27 19:04:34 crc kubenswrapper[4792]: I1127 19:04:34.533756 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5994f6989f-4s6cj_01e17fe3-0b99-4719-8a19-bdb45dabeaac/manager/0.log"
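Two "Killing container with a grace period" entries appear in this stretch: gracePeriod=600 for the liveness restart and gracePeriod=2 for the API-driven pod delete. The general Kubernetes contract is TERM first, then a hard kill if the process has not exited when the grace period runs out. The Go sketch below illustrates only that contract (it is Unix-only and not CRI-O's implementation):

```go
// Sketch of "kill with a grace period": SIGTERM, wait, then SIGKILL.
package main

import (
	"os/exec"
	"syscall"
	"time"
)

func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period (exitCode=0 above)
	case <-time.After(grace):
		return cmd.Process.Kill() // hard stop once the grace period expires
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = killWithGrace(cmd, 2*time.Second) // 2s, like the pod delete above
}
```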
ADD" source="api" pods=["openshift-marketplace/community-operators-8hzqj"] Nov 27 19:05:28 crc kubenswrapper[4792]: E1127 19:05:28.649966 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="extract-utilities" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.649983 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="extract-utilities" Nov 27 19:05:28 crc kubenswrapper[4792]: E1127 19:05:28.650007 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="extract-content" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.650016 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="extract-content" Nov 27 19:05:28 crc kubenswrapper[4792]: E1127 19:05:28.650033 4792 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="registry-server" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.650040 4792 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="registry-server" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.650312 4792 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3deb96f-d16c-4554-abdc-e4bf834711d1" containerName="registry-server" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.652712 4792 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.668589 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8hzqj"] Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.798206 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/000e6846-1583-4076-a536-ece19a718ef2-kube-api-access-l7fsv\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.799085 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-catalog-content\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.799387 4792 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-utilities\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.902157 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/000e6846-1583-4076-a536-ece19a718ef2-kube-api-access-l7fsv\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 
19:05:28.902257 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-catalog-content\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.902318 4792 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-utilities\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.902985 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-catalog-content\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.903011 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-utilities\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.925635 4792 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/000e6846-1583-4076-a536-ece19a718ef2-kube-api-access-l7fsv\") pod \"community-operators-8hzqj\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:28 crc kubenswrapper[4792]: I1127 19:05:28.983121 4792 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:29 crc kubenswrapper[4792]: I1127 19:05:29.681070 4792 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8hzqj"] Nov 27 19:05:30 crc kubenswrapper[4792]: I1127 19:05:30.585812 4792 generic.go:334] "Generic (PLEG): container finished" podID="000e6846-1583-4076-a536-ece19a718ef2" containerID="56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c" exitCode=0 Nov 27 19:05:30 crc kubenswrapper[4792]: I1127 19:05:30.585905 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hzqj" event={"ID":"000e6846-1583-4076-a536-ece19a718ef2","Type":"ContainerDied","Data":"56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c"} Nov 27 19:05:30 crc kubenswrapper[4792]: I1127 19:05:30.586164 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hzqj" event={"ID":"000e6846-1583-4076-a536-ece19a718ef2","Type":"ContainerStarted","Data":"673a264879c87bf2d46d72a39749dd9bb0525f3073446afd6605de0bad13960c"} Nov 27 19:05:32 crc kubenswrapper[4792]: I1127 19:05:32.609979 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hzqj" event={"ID":"000e6846-1583-4076-a536-ece19a718ef2","Type":"ContainerStarted","Data":"04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56"} Nov 27 19:05:33 crc kubenswrapper[4792]: I1127 19:05:33.622570 4792 generic.go:334] "Generic (PLEG): container finished" podID="000e6846-1583-4076-a536-ece19a718ef2" containerID="04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56" exitCode=0 Nov 27 19:05:33 crc kubenswrapper[4792]: I1127 19:05:33.622682 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hzqj" event={"ID":"000e6846-1583-4076-a536-ece19a718ef2","Type":"ContainerDied","Data":"04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56"} Nov 27 19:05:34 crc kubenswrapper[4792]: I1127 19:05:34.655121 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hzqj" event={"ID":"000e6846-1583-4076-a536-ece19a718ef2","Type":"ContainerStarted","Data":"fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64"} Nov 27 19:05:34 crc kubenswrapper[4792]: I1127 19:05:34.707183 4792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8hzqj" podStartSLOduration=3.002786619 podStartE2EDuration="6.707157868s" podCreationTimestamp="2025-11-27 19:05:28 +0000 UTC" firstStartedPulling="2025-11-27 19:05:30.587914339 +0000 UTC m=+6952.930740657" lastFinishedPulling="2025-11-27 19:05:34.292285588 +0000 UTC m=+6956.635111906" observedRunningTime="2025-11-27 19:05:34.686069374 +0000 UTC m=+6957.028895692" watchObservedRunningTime="2025-11-27 19:05:34.707157868 +0000 UTC m=+6957.049984186" Nov 27 19:05:38 crc kubenswrapper[4792]: I1127 19:05:38.984095 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:38 crc kubenswrapper[4792]: I1127 19:05:38.984743 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:39 crc kubenswrapper[4792]: I1127 19:05:39.071941 4792 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
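The pod_startup_latency_tracker entry above is internally consistent, and its fields make the arithmetic checkable from the log alone: the end-to-end duration is watchObservedRunningTime minus podCreationTimestamp (6.707157868s), and the SLO duration is that figure minus the image-pull window, lastFinishedPulling minus firstStartedPulling (3.704371249s), giving the logged 3.002786619s. The identities here are inferred from the numbers, not from kubelet source; the Go sketch below just reproduces the subtraction:

```go
// Check the startup-duration arithmetic from the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-27 19:05:28 +0000 UTC")
	firstPull := mustParse("2025-11-27 19:05:30.587914339 +0000 UTC")
	lastPull := mustParse("2025-11-27 19:05:34.292285588 +0000 UTC")
	watched := mustParse("2025-11-27 19:05:34.707157868 +0000 UTC")

	e2e := watched.Sub(created)          // 6.707157868s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.002786619s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```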
pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:39 crc kubenswrapper[4792]: I1127 19:05:39.775993 4792 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:39 crc kubenswrapper[4792]: I1127 19:05:39.832582 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8hzqj"] Nov 27 19:05:41 crc kubenswrapper[4792]: I1127 19:05:41.743863 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8hzqj" podUID="000e6846-1583-4076-a536-ece19a718ef2" containerName="registry-server" containerID="cri-o://fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64" gracePeriod=2 Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.301857 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.422831 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/000e6846-1583-4076-a536-ece19a718ef2-kube-api-access-l7fsv\") pod \"000e6846-1583-4076-a536-ece19a718ef2\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.423306 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-catalog-content\") pod \"000e6846-1583-4076-a536-ece19a718ef2\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.423436 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-utilities\") pod \"000e6846-1583-4076-a536-ece19a718ef2\" (UID: \"000e6846-1583-4076-a536-ece19a718ef2\") " Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.424403 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-utilities" (OuterVolumeSpecName: "utilities") pod "000e6846-1583-4076-a536-ece19a718ef2" (UID: "000e6846-1583-4076-a536-ece19a718ef2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.425514 4792 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-utilities\") on node \"crc\" DevicePath \"\"" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.433057 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000e6846-1583-4076-a536-ece19a718ef2-kube-api-access-l7fsv" (OuterVolumeSpecName: "kube-api-access-l7fsv") pod "000e6846-1583-4076-a536-ece19a718ef2" (UID: "000e6846-1583-4076-a536-ece19a718ef2"). InnerVolumeSpecName "kube-api-access-l7fsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.475167 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "000e6846-1583-4076-a536-ece19a718ef2" (UID: "000e6846-1583-4076-a536-ece19a718ef2"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.530709 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7fsv\" (UniqueName: \"kubernetes.io/projected/000e6846-1583-4076-a536-ece19a718ef2-kube-api-access-l7fsv\") on node \"crc\" DevicePath \"\"" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.530754 4792 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/000e6846-1583-4076-a536-ece19a718ef2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.761226 4792 generic.go:334] "Generic (PLEG): container finished" podID="000e6846-1583-4076-a536-ece19a718ef2" containerID="fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64" exitCode=0 Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.761331 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hzqj" event={"ID":"000e6846-1583-4076-a536-ece19a718ef2","Type":"ContainerDied","Data":"fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64"} Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.761583 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8hzqj" event={"ID":"000e6846-1583-4076-a536-ece19a718ef2","Type":"ContainerDied","Data":"673a264879c87bf2d46d72a39749dd9bb0525f3073446afd6605de0bad13960c"} Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.761613 4792 scope.go:117] "RemoveContainer" containerID="fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.761355 4792 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8hzqj" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.791827 4792 scope.go:117] "RemoveContainer" containerID="04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.803190 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8hzqj"] Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.820988 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8hzqj"] Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.831838 4792 scope.go:117] "RemoveContainer" containerID="56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.911414 4792 scope.go:117] "RemoveContainer" containerID="fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64" Nov 27 19:05:42 crc kubenswrapper[4792]: E1127 19:05:42.911969 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64\": container with ID starting with fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64 not found: ID does not exist" containerID="fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.912007 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64"} err="failed to get container status \"fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64\": rpc error: code = NotFound desc = could not find container \"fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64\": container with ID starting with fb05fb26a271a675e9fc3c8e4379882b41fa5ff8199611c7fd003be1d3914b64 not found: ID does not exist" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.912578 4792 scope.go:117] "RemoveContainer" containerID="04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56" Nov 27 19:05:42 crc kubenswrapper[4792]: E1127 19:05:42.913159 4792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56\": container with ID starting with 04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56 not found: ID does not exist" containerID="04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.913190 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56"} err="failed to get container status \"04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56\": rpc error: code = NotFound desc = could not find container \"04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56\": container with ID starting with 04af9e64b264764e60b66772cda8c3eb8e3916d8f272650b98c2631c9d263d56 not found: ID does not exist" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.913208 4792 scope.go:117] "RemoveContainer" containerID="56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c" Nov 27 19:05:42 crc kubenswrapper[4792]: E1127 19:05:42.913552 4792 log.go:32] "ContainerStatus from runtime service 
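The "ContainerStatus from runtime service failed ... NotFound" errors above are the benign tail of a delete that raced with CRI-O's own cleanup: the container is already gone, so the status lookup fails and the delete has nothing left to do. A minimal Go sketch of that idempotent-delete pattern (not the kubelet's code; requires the google.golang.org/grpc module):

```go
// Treat gRPC NotFound from a remove call as "already removed".
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical wrapper around a CRI remove call.
func removeContainer(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		return nil // container already gone: success for our purposes
	}
	return err
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeContainer(gone, "219048cdfd39")) // prints <nil>
}
```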
failed" err="rpc error: code = NotFound desc = could not find container \"56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c\": container with ID starting with 56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c not found: ID does not exist" containerID="56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c" Nov 27 19:05:42 crc kubenswrapper[4792]: I1127 19:05:42.913575 4792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c"} err="failed to get container status \"56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c\": rpc error: code = NotFound desc = could not find container \"56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c\": container with ID starting with 56be127368389ca61a289ee805ed3fee53e0cdb450de710d618287499347413c not found: ID does not exist" Nov 27 19:05:44 crc kubenswrapper[4792]: I1127 19:05:44.699581 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000e6846-1583-4076-a536-ece19a718ef2" path="/var/lib/kubelet/pods/000e6846-1583-4076-a536-ece19a718ef2/volumes" Nov 27 19:06:08 crc kubenswrapper[4792]: I1127 19:06:08.290987 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 19:06:08 crc kubenswrapper[4792]: I1127 19:06:08.291475 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 19:06:10 crc kubenswrapper[4792]: I1127 19:06:10.270409 4792 scope.go:117] "RemoveContainer" containerID="c044ecfe5603a45632abadb11eb7f3280bcc25fe2fb5ab845a150110b1ce1e2a" Nov 27 19:06:29 crc kubenswrapper[4792]: I1127 19:06:29.399008 4792 generic.go:334] "Generic (PLEG): container finished" podID="09b4a162-3614-4ae6-8ff2-05169fed8b06" containerID="6d1b46823b28d66e4abebcb6af05d70353a125206bbc131fd46c700d558b7d7f" exitCode=0 Nov 27 19:06:29 crc kubenswrapper[4792]: I1127 19:06:29.399569 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lmr6d/must-gather-6vcwp" event={"ID":"09b4a162-3614-4ae6-8ff2-05169fed8b06","Type":"ContainerDied","Data":"6d1b46823b28d66e4abebcb6af05d70353a125206bbc131fd46c700d558b7d7f"} Nov 27 19:06:29 crc kubenswrapper[4792]: I1127 19:06:29.400457 4792 scope.go:117] "RemoveContainer" containerID="6d1b46823b28d66e4abebcb6af05d70353a125206bbc131fd46c700d558b7d7f" Nov 27 19:06:29 crc kubenswrapper[4792]: I1127 19:06:29.673762 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lmr6d_must-gather-6vcwp_09b4a162-3614-4ae6-8ff2-05169fed8b06/gather/0.log" Nov 27 19:06:38 crc kubenswrapper[4792]: I1127 19:06:38.289944 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 19:06:38 crc kubenswrapper[4792]: I1127 19:06:38.290535 4792 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.207448 4792 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lmr6d/must-gather-6vcwp"] Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.208242 4792 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-lmr6d/must-gather-6vcwp" podUID="09b4a162-3614-4ae6-8ff2-05169fed8b06" containerName="copy" containerID="cri-o://fe4c60f4728578aea1b35731758b4bcee76c8265304611f299427ce0e7c07ab9" gracePeriod=2 Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.221695 4792 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lmr6d/must-gather-6vcwp"] Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.574791 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lmr6d_must-gather-6vcwp_09b4a162-3614-4ae6-8ff2-05169fed8b06/copy/0.log" Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.575493 4792 generic.go:334] "Generic (PLEG): container finished" podID="09b4a162-3614-4ae6-8ff2-05169fed8b06" containerID="fe4c60f4728578aea1b35731758b4bcee76c8265304611f299427ce0e7c07ab9" exitCode=143 Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.762037 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lmr6d_must-gather-6vcwp_09b4a162-3614-4ae6-8ff2-05169fed8b06/copy/0.log" Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.768250 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/must-gather-6vcwp" Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.883456 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgjm8\" (UniqueName: \"kubernetes.io/projected/09b4a162-3614-4ae6-8ff2-05169fed8b06-kube-api-access-mgjm8\") pod \"09b4a162-3614-4ae6-8ff2-05169fed8b06\" (UID: \"09b4a162-3614-4ae6-8ff2-05169fed8b06\") " Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.883963 4792 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/09b4a162-3614-4ae6-8ff2-05169fed8b06-must-gather-output\") pod \"09b4a162-3614-4ae6-8ff2-05169fed8b06\" (UID: \"09b4a162-3614-4ae6-8ff2-05169fed8b06\") " Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.891885 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b4a162-3614-4ae6-8ff2-05169fed8b06-kube-api-access-mgjm8" (OuterVolumeSpecName: "kube-api-access-mgjm8") pod "09b4a162-3614-4ae6-8ff2-05169fed8b06" (UID: "09b4a162-3614-4ae6-8ff2-05169fed8b06"). InnerVolumeSpecName "kube-api-access-mgjm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 27 19:06:43 crc kubenswrapper[4792]: I1127 19:06:43.986530 4792 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgjm8\" (UniqueName: \"kubernetes.io/projected/09b4a162-3614-4ae6-8ff2-05169fed8b06-kube-api-access-mgjm8\") on node \"crc\" DevicePath \"\"" Nov 27 19:06:44 crc kubenswrapper[4792]: I1127 19:06:44.078719 4792 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b4a162-3614-4ae6-8ff2-05169fed8b06-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "09b4a162-3614-4ae6-8ff2-05169fed8b06" (UID: "09b4a162-3614-4ae6-8ff2-05169fed8b06"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 27 19:06:44 crc kubenswrapper[4792]: I1127 19:06:44.088490 4792 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/09b4a162-3614-4ae6-8ff2-05169fed8b06-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 27 19:06:44 crc kubenswrapper[4792]: I1127 19:06:44.588881 4792 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lmr6d_must-gather-6vcwp_09b4a162-3614-4ae6-8ff2-05169fed8b06/copy/0.log" Nov 27 19:06:44 crc kubenswrapper[4792]: I1127 19:06:44.589564 4792 scope.go:117] "RemoveContainer" containerID="fe4c60f4728578aea1b35731758b4bcee76c8265304611f299427ce0e7c07ab9" Nov 27 19:06:44 crc kubenswrapper[4792]: I1127 19:06:44.589623 4792 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lmr6d/must-gather-6vcwp" Nov 27 19:06:44 crc kubenswrapper[4792]: I1127 19:06:44.619813 4792 scope.go:117] "RemoveContainer" containerID="6d1b46823b28d66e4abebcb6af05d70353a125206bbc131fd46c700d558b7d7f" Nov 27 19:06:44 crc kubenswrapper[4792]: I1127 19:06:44.704136 4792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b4a162-3614-4ae6-8ff2-05169fed8b06" path="/var/lib/kubelet/pods/09b4a162-3614-4ae6-8ff2-05169fed8b06/volumes" Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.290743 4792 patch_prober.go:28] interesting pod/machine-config-daemon-56bcx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.291607 4792 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.291676 4792 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.292865 4792 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"} pod="openshift-machine-config-operator/machine-config-daemon-56bcx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.292940 4792 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerName="machine-config-daemon" containerID="cri-o://1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17" gracePeriod=600 Nov 27 19:07:08 crc kubenswrapper[4792]: E1127 19:07:08.421224 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.902180 4792 generic.go:334] "Generic (PLEG): container finished" podID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17" exitCode=0 Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.902232 4792 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" event={"ID":"8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36","Type":"ContainerDied","Data":"1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"} Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.902563 4792 scope.go:117] "RemoveContainer" containerID="e7914bc2a1025b6f976e576c7aa0b7b22098e5a8a12f9f7496d8fdc76a25c346" Nov 27 19:07:08 crc kubenswrapper[4792]: I1127 19:07:08.903391 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17" Nov 27 19:07:08 crc kubenswrapper[4792]: E1127 19:07:08.903781 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 19:07:10 crc kubenswrapper[4792]: I1127 19:07:10.368345 4792 scope.go:117] "RemoveContainer" containerID="7d2e8863305edd7ea7008a0a671a6a10d95a090f8c77d114eae974e5ac28bcf6" Nov 27 19:07:22 crc kubenswrapper[4792]: I1127 19:07:22.686861 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17" Nov 27 19:07:22 crc kubenswrapper[4792]: E1127 19:07:22.687763 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36" Nov 27 19:07:37 crc kubenswrapper[4792]: I1127 19:07:37.687862 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17" Nov 27 19:07:37 crc kubenswrapper[4792]: E1127 19:07:37.688957 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
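From here the pod is in CrashLoopBackOff and every sync attempt is refused until the back-off window expires, which is why the same "back-off 5m0s" error repeats below. Kubernetes documents the container restart back-off as starting at 10s, doubling per restart, and capping at 5m; the Go sketch below reproduces that documented schedule (an illustration, not kubelet's backoff implementation):

```go
// Kubelet-style restart back-off: 10s, doubling, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func backoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute // the cap seen as "back-off 5m0s" above
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		// prints: 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
		fmt.Printf("restart %d -> wait %v\n", r, backoff(r))
	}
}
```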
Nov 27 19:07:37 crc kubenswrapper[4792]: I1127 19:07:37.687862 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:07:37 crc kubenswrapper[4792]: E1127 19:07:37.688957 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:07:50 crc kubenswrapper[4792]: I1127 19:07:50.686933 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:07:50 crc kubenswrapper[4792]: E1127 19:07:50.687987 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:08:01 crc kubenswrapper[4792]: I1127 19:08:01.687028 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:08:01 crc kubenswrapper[4792]: E1127 19:08:01.687796 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:08:16 crc kubenswrapper[4792]: I1127 19:08:16.687798 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:08:16 crc kubenswrapper[4792]: E1127 19:08:16.688683 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:08:27 crc kubenswrapper[4792]: I1127 19:08:27.687699 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:08:27 crc kubenswrapper[4792]: E1127 19:08:27.688485 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:08:42 crc kubenswrapper[4792]: I1127 19:08:42.688028 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:08:42 crc kubenswrapper[4792]: E1127 19:08:42.688845 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:08:55 crc kubenswrapper[4792]: I1127 19:08:55.687075 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:08:55 crc kubenswrapper[4792]: E1127 19:08:55.687976 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:09:10 crc kubenswrapper[4792]: I1127 19:09:10.688223 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:09:10 crc kubenswrapper[4792]: E1127 19:09:10.689332 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:09:22 crc kubenswrapper[4792]: I1127 19:09:22.687856 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:09:22 crc kubenswrapper[4792]: E1127 19:09:22.688884 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:09:33 crc kubenswrapper[4792]: I1127 19:09:33.687902 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:09:33 crc kubenswrapper[4792]: E1127 19:09:33.688991 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:09:47 crc kubenswrapper[4792]: I1127 19:09:47.687234 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:09:47 crc kubenswrapper[4792]: E1127 19:09:47.688155 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:10:00 crc kubenswrapper[4792]: I1127 19:10:00.687679 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:10:00 crc kubenswrapper[4792]: E1127 19:10:00.688402 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:10:13 crc kubenswrapper[4792]: I1127 19:10:13.687140 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:10:13 crc kubenswrapper[4792]: E1127 19:10:13.687990 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:10:25 crc kubenswrapper[4792]: I1127 19:10:25.687460 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:10:25 crc kubenswrapper[4792]: E1127 19:10:25.689265 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"
Nov 27 19:10:37 crc kubenswrapper[4792]: I1127 19:10:37.687473 4792 scope.go:117] "RemoveContainer" containerID="1eae7009dacdeb109e95abc5e7f92c1b167f085c03773b5d4962607f3e265a17"
Nov 27 19:10:37 crc kubenswrapper[4792]: E1127 19:10:37.688364 4792 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-56bcx_openshift-machine-config-operator(8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36)\"" pod="openshift-machine-config-operator/machine-config-daemon-56bcx" podUID="8e78d9b4-acb5-4fb4-b491-38c7c0a5ce36"